00:00:00.002 Started by upstream project "autotest-per-patch" build number 124217
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.108 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.109 The recommended git tool is: git
00:00:00.109 using credential 00000000-0000-0000-0000-000000000002
00:00:00.111 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.155 Fetching changes from the remote Git repository
00:00:00.157 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.193 Using shallow fetch with depth 1
00:00:00.193 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.193 > git --version # timeout=10
00:00:00.224 > git --version # 'git version 2.39.2'
00:00:00.224 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.242 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.242 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.600 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.612 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.623 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD)
00:00:07.623 > git config core.sparsecheckout # timeout=10
00:00:07.634 > git read-tree -mu HEAD # timeout=10
00:00:07.649 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5
00:00:07.669 Commit message: "pool: fixes for VisualBuild class"
00:00:07.669 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10
00:00:07.769 [Pipeline] Start of Pipeline
00:00:07.797 [Pipeline] library
00:00:07.803 Loading library shm_lib@master
00:00:07.804 Library shm_lib@master is cached. Copying from home.
00:00:07.843 [Pipeline] node 00:00:22.847 Still waiting to schedule task 00:00:22.847 ‘FCP03’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.847 ‘FCP04’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.847 ‘FCP07’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.847 ‘FCP08’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.847 ‘FCP09’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.847 ‘FCP10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.847 ‘FCP11’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.847 ‘FCP12’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.847 ‘GP10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.848 ‘GP11’ is offline 00:00:22.848 ‘GP12’ is offline 00:00:22.848 ‘GP13’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.848 ‘GP14’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.848 ‘GP15’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.848 ‘GP16’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.848 ‘GP18’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.848 ‘GP19’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.848 ‘GP1’ is offline 00:00:22.848 ‘GP20’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.848 ‘GP21’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.848 ‘GP22’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.848 ‘GP24’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.848 ‘GP2’ is offline 00:00:22.848 ‘GP3’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘GP4’ is offline 00:00:22.849 ‘GP5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘GP6’ is offline 00:00:22.849 ‘GP8’ is offline 00:00:22.849 ‘GP9’ is offline 00:00:22.849 ‘ImageBuilder1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘Jenkins’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘ME1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘ME2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘ME3’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘PE5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM11’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM28’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM29’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM30’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM31’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM32’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM33’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM34’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM35’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM6’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM7’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘SM8’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘VM-host-PE2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘VM-host-PE4’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘VM-host-WFP25’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘WCP0’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘WCP2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘WCP4’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘WFP13’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.849 ‘WFP17’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP21’ is offline 00:00:22.850 ‘WFP23’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP29’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP32’ doesn’t have label 
‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP33’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP34’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP35’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP36’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP37’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP38’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP41’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP49’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP50’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP63’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP65’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP66’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP67’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP68’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP69’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘WFP6’ is offline 00:00:22.850 ‘WFP8’ is offline 00:00:22.850 ‘WFP9’ is offline 00:00:22.850 ‘ipxe-staging’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘spdk-pxe-01’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:22.850 ‘spdk-pxe-02’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:05:34.249 Running on WFP20 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:34.250 [Pipeline] { 00:05:34.259 [Pipeline] catchError 00:05:34.260 [Pipeline] { 00:05:34.270 [Pipeline] wrap 00:05:34.278 [Pipeline] { 00:05:34.284 [Pipeline] stage 00:05:34.287 [Pipeline] { (Prologue) 00:05:34.445 [Pipeline] sh 00:05:34.725 + logger -p user.info -t JENKINS-CI 00:05:34.746 [Pipeline] echo 00:05:34.747 Node: WFP20 00:05:34.756 [Pipeline] sh 00:05:35.053 [Pipeline] setCustomBuildProperty 00:05:35.067 [Pipeline] echo 00:05:35.069 Cleanup processes 00:05:35.074 [Pipeline] sh 00:05:35.356 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:35.356 1092607 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:35.369 [Pipeline] sh 00:05:35.651 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:35.651 ++ grep -v 'sudo pgrep' 00:05:35.651 ++ awk '{print $1}' 00:05:35.651 + sudo kill -9 00:05:35.651 + true 00:05:35.666 [Pipeline] cleanWs 00:05:35.675 [WS-CLEANUP] Deleting project workspace... 00:05:35.675 [WS-CLEANUP] Deferred wipeout is used... 
00:05:35.681 [WS-CLEANUP] done 00:05:35.685 [Pipeline] setCustomBuildProperty 00:05:35.700 [Pipeline] sh 00:05:35.981 + sudo git config --global --replace-all safe.directory '*' 00:05:36.056 [Pipeline] nodesByLabel 00:05:36.058 Found a total of 2 nodes with the 'sorcerer' label 00:05:36.068 [Pipeline] httpRequest 00:05:36.072 HttpMethod: GET 00:05:36.073 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:05:36.075 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:05:36.078 Response Code: HTTP/1.1 200 OK 00:05:36.079 Success: Status code 200 is in the accepted range: 200,404 00:05:36.080 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:05:36.222 [Pipeline] sh 00:05:36.504 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:05:36.521 [Pipeline] httpRequest 00:05:36.525 HttpMethod: GET 00:05:36.526 URL: http://10.211.164.101/packages/spdk_c5b9f923d1f02be5c638708ffd4f439a17fc435d.tar.gz 00:05:36.526 Sending request to url: http://10.211.164.101/packages/spdk_c5b9f923d1f02be5c638708ffd4f439a17fc435d.tar.gz 00:05:36.528 Response Code: HTTP/1.1 200 OK 00:05:36.528 Success: Status code 200 is in the accepted range: 200,404 00:05:36.529 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c5b9f923d1f02be5c638708ffd4f439a17fc435d.tar.gz 00:05:38.696 [Pipeline] sh 00:05:38.979 + tar --no-same-owner -xf spdk_c5b9f923d1f02be5c638708ffd4f439a17fc435d.tar.gz 00:05:42.278 [Pipeline] sh 00:05:42.564 + git -C spdk log --oneline -n5 00:05:42.564 c5b9f923d test/nvmf: run IO during TLS with kernel 00:05:42.564 25b1d44ec test: add a test for SPDK vs kernel TLS 00:05:42.564 7fc2ab43c scripts: add a keyctl session wrapper 00:05:42.564 00058f4d0 test/nvmf/common: do not use subnqn as model 00:05:42.564 fa40728d6 test/common: continue waitforserial on grep error 00:05:42.576 [Pipeline] } 00:05:42.595 [Pipeline] // stage 00:05:42.605 [Pipeline] stage 00:05:42.607 [Pipeline] { (Prepare) 00:05:42.630 [Pipeline] writeFile 00:05:42.651 [Pipeline] sh 00:05:42.934 + logger -p user.info -t JENKINS-CI 00:05:42.948 [Pipeline] sh 00:05:43.232 + logger -p user.info -t JENKINS-CI 00:05:43.246 [Pipeline] sh 00:05:43.529 + cat autorun-spdk.conf 00:05:43.529 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:43.529 SPDK_TEST_NVMF=1 00:05:43.529 SPDK_TEST_NVME_CLI=1 00:05:43.529 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:43.529 SPDK_TEST_NVMF_NICS=e810 00:05:43.529 SPDK_TEST_VFIOUSER=1 00:05:43.529 SPDK_RUN_UBSAN=1 00:05:43.529 NET_TYPE=phy 00:05:43.537 RUN_NIGHTLY=0 00:05:43.542 [Pipeline] readFile 00:05:43.570 [Pipeline] withEnv 00:05:43.572 [Pipeline] { 00:05:43.590 [Pipeline] sh 00:05:43.874 + set -ex 00:05:43.874 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:05:43.874 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:43.875 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:43.875 ++ SPDK_TEST_NVMF=1 00:05:43.875 ++ SPDK_TEST_NVME_CLI=1 00:05:43.875 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:43.875 ++ SPDK_TEST_NVMF_NICS=e810 00:05:43.875 ++ SPDK_TEST_VFIOUSER=1 00:05:43.875 ++ SPDK_RUN_UBSAN=1 00:05:43.875 ++ NET_TYPE=phy 00:05:43.875 ++ RUN_NIGHTLY=0 00:05:43.875 + case $SPDK_TEST_NVMF_NICS in 00:05:43.875 + DRIVERS=ice 00:05:43.875 + [[ tcp == \r\d\m\a ]] 00:05:43.875 + [[ -n ice ]] 00:05:43.875 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:05:43.875 rmmod: ERROR: Module mlx4_ib is 
not currently loaded
00:05:43.875 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:05:43.875 rmmod: ERROR: Module irdma is not currently loaded
00:05:43.875 rmmod: ERROR: Module i40iw is not currently loaded
00:05:43.875 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:05:43.875 + true
00:05:43.875 + for D in $DRIVERS
00:05:43.875 + sudo modprobe ice
00:05:43.875 + exit 0
00:05:43.884 [Pipeline] }
00:05:43.903 [Pipeline] // withEnv
00:05:43.909 [Pipeline] }
00:05:43.927 [Pipeline] // stage
00:05:43.937 [Pipeline] catchError
00:05:43.939 [Pipeline] {
00:05:43.956 [Pipeline] timeout
00:05:43.957 Timeout set to expire in 50 min
00:05:43.958 [Pipeline] {
00:05:43.976 [Pipeline] stage
00:05:43.978 [Pipeline] { (Tests)
00:05:43.994 [Pipeline] sh
00:05:44.278 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:44.278 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:44.278 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:44.278 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:05:44.278 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:44.278 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:44.278 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:05:44.278 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:44.278 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:44.278 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:44.278 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:05:44.278 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:44.278 + source /etc/os-release
00:05:44.278 ++ NAME='Fedora Linux'
00:05:44.278 ++ VERSION='38 (Cloud Edition)'
00:05:44.278 ++ ID=fedora
00:05:44.278 ++ VERSION_ID=38
00:05:44.278 ++ VERSION_CODENAME=
00:05:44.278 ++ PLATFORM_ID=platform:f38
00:05:44.278 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:05:44.278 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:44.278 ++ LOGO=fedora-logo-icon
00:05:44.278 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:05:44.278 ++ HOME_URL=https://fedoraproject.org/
00:05:44.278 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:05:44.278 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:44.278 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:44.278 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:44.278 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:05:44.278 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:44.278 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:05:44.278 ++ SUPPORT_END=2024-05-14
00:05:44.278 ++ VARIANT='Cloud Edition'
00:05:44.278 ++ VARIANT_ID=cloud
00:05:44.278 + uname -a
00:05:44.278 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:05:44.278 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:48.501 Hugepages
00:05:48.501 node hugesize free / total
00:05:48.501 node0 1048576kB 0 / 0
00:05:48.501 node0 2048kB 0 / 0
00:05:48.501 node1 1048576kB 0 / 0
00:05:48.501 node1 2048kB 0 / 0
00:05:48.501
00:05:48.501 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:48.501 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:05:48.501 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:05:48.501 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:05:48.501 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:05:48.501 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:05:48.501 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:05:48.501 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:05:48.501 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:05:48.501 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:05:48.501 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:05:48.501 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:05:48.501 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:05:48.501 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:05:48.501 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:05:48.501 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:05:48.501 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:05:48.501 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:05:48.501 + rm -f /tmp/spdk-ld-path
00:05:48.501 + source autorun-spdk.conf
00:05:48.501 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:48.501 ++ SPDK_TEST_NVMF=1
00:05:48.501 ++ SPDK_TEST_NVME_CLI=1
00:05:48.501 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:48.501 ++ SPDK_TEST_NVMF_NICS=e810
00:05:48.501 ++ SPDK_TEST_VFIOUSER=1
00:05:48.501 ++ SPDK_RUN_UBSAN=1
00:05:48.501 ++ NET_TYPE=phy
00:05:48.502 ++ RUN_NIGHTLY=0
00:05:48.502 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:48.502 + [[ -n '' ]]
00:05:48.502 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:48.502 + for M in /var/spdk/build-*-manifest.txt
00:05:48.502 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:48.502 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:48.502 + for M in /var/spdk/build-*-manifest.txt
00:05:48.502 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:48.502 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:48.502 ++ uname
00:05:48.502 + [[ Linux == \L\i\n\u\x ]]
00:05:48.502 + sudo dmesg -T
00:05:48.502 + sudo dmesg --clear
00:05:48.502 + dmesg_pid=1093668
00:05:48.502 + [[ Fedora Linux == FreeBSD ]]
00:05:48.502 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:48.502 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:48.502 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:48.502 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:05:48.502 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:05:48.502 + sudo dmesg -Tw
00:05:48.502 + [[ -x /usr/src/fio-static/fio ]]
00:05:48.502 + export FIO_BIN=/usr/src/fio-static/fio
00:05:48.502 + FIO_BIN=/usr/src/fio-static/fio
00:05:48.502 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:48.502 + [[ !
-v VFIO_QEMU_BIN ]] 00:05:48.502 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:48.502 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:48.502 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:48.502 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:48.502 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:48.502 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:48.502 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:48.502 Test configuration: 00:05:48.502 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:48.502 SPDK_TEST_NVMF=1 00:05:48.502 SPDK_TEST_NVME_CLI=1 00:05:48.502 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:48.502 SPDK_TEST_NVMF_NICS=e810 00:05:48.502 SPDK_TEST_VFIOUSER=1 00:05:48.502 SPDK_RUN_UBSAN=1 00:05:48.502 NET_TYPE=phy 00:05:48.502 RUN_NIGHTLY=0 13:34:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:48.502 13:34:02 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:48.502 13:34:02 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.502 13:34:02 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.502 13:34:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.502 13:34:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.502 13:34:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.502 13:34:02 -- paths/export.sh@5 -- $ export PATH 00:05:48.502 13:34:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.502 13:34:02 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:48.502 13:34:02 -- common/autobuild_common.sh@437 -- $ date +%s 00:05:48.502 13:34:02 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718019242.XXXXXX 00:05:48.502 13:34:02 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718019242.AZnijm 00:05:48.502 13:34:02 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:05:48.502 13:34:02 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:05:48.502 13:34:02 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:05:48.502 13:34:02 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:05:48.502 13:34:02 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:05:48.502 13:34:02 -- common/autobuild_common.sh@453 -- $ get_config_params 00:05:48.502 13:34:02 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:05:48.502 13:34:02 -- common/autotest_common.sh@10 -- $ set +x 00:05:48.502 13:34:02 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:05:48.502 13:34:02 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:05:48.502 13:34:02 -- pm/common@17 -- $ local monitor 00:05:48.502 13:34:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:48.502 13:34:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:48.502 13:34:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:48.502 13:34:02 -- pm/common@21 -- $ date +%s 00:05:48.502 13:34:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:48.502 13:34:02 -- pm/common@21 -- $ date +%s 00:05:48.502 13:34:02 -- pm/common@25 -- $ sleep 1 00:05:48.502 13:34:02 -- pm/common@21 -- $ date +%s 00:05:48.502 13:34:02 -- pm/common@21 -- $ date +%s 00:05:48.502 13:34:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718019242 00:05:48.502 13:34:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718019242 00:05:48.502 13:34:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718019242 00:05:48.502 13:34:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718019242 00:05:48.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718019242_collect-vmstat.pm.log 00:05:48.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718019242_collect-cpu-load.pm.log 00:05:48.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718019242_collect-cpu-temp.pm.log 00:05:48.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718019242_collect-bmc-pm.bmc.pm.log 00:05:49.441 13:34:03 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:05:49.441 13:34:03 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:49.441 13:34:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:49.441 13:34:03 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:49.441 13:34:03 -- spdk/autobuild.sh@16 -- $ date -u 00:05:49.441 Mon Jun 10 11:34:03 AM UTC 2024 00:05:49.441 13:34:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:49.441 v24.09-pre-61-gc5b9f923d 00:05:49.441 13:34:03 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:05:49.441 13:34:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:49.441 13:34:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:49.441 13:34:03 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:05:49.441 13:34:03 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:05:49.441 13:34:03 -- common/autotest_common.sh@10 -- $ set +x 00:05:49.441 ************************************ 00:05:49.441 START TEST ubsan 00:05:49.441 ************************************ 00:05:49.441 13:34:03 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:05:49.441 using ubsan 00:05:49.441 00:05:49.441 real 0m0.001s 00:05:49.441 user 0m0.000s 00:05:49.441 sys 0m0.000s 00:05:49.441 13:34:03 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:05:49.441 13:34:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:49.441 ************************************ 00:05:49.441 END TEST ubsan 00:05:49.441 ************************************ 00:05:49.441 13:34:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:49.441 13:34:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:49.441 13:34:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:49.441 13:34:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:49.441 13:34:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:49.441 13:34:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:49.441 13:34:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:49.441 13:34:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:49.441 13:34:03 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:05:49.701 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:49.701 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:49.960 Using 'verbs' RDMA provider 00:06:05.783 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:06:20.664 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:06:20.664 Creating mk/config.mk...done. 00:06:20.664 Creating mk/cc.flags.mk...done. 00:06:20.664 Type 'make' to build. 00:06:20.664 13:34:33 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:06:20.664 13:34:33 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:06:20.664 13:34:33 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:06:20.664 13:34:33 -- common/autotest_common.sh@10 -- $ set +x 00:06:20.664 ************************************ 00:06:20.664 START TEST make 00:06:20.664 ************************************ 00:06:20.664 13:34:33 make -- common/autotest_common.sh@1124 -- $ make -j112 00:06:20.664 make[1]: Nothing to be done for 'all'. 
00:06:20.921 The Meson build system
00:06:20.921 Version: 1.3.1
00:06:20.921 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:06:20.921 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:20.921 Build type: native build
00:06:20.921 Project name: libvfio-user
00:06:20.921 Project version: 0.0.1
00:06:20.921 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:06:20.921 C linker for the host machine: cc ld.bfd 2.39-16
00:06:20.921 Host machine cpu family: x86_64
00:06:20.921 Host machine cpu: x86_64
00:06:20.921 Run-time dependency threads found: YES
00:06:20.921 Library dl found: YES
00:06:20.921 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:06:20.921 Run-time dependency json-c found: YES 0.17
00:06:20.921 Run-time dependency cmocka found: YES 1.1.7
00:06:20.921 Program pytest-3 found: NO
00:06:20.921 Program flake8 found: NO
00:06:20.922 Program misspell-fixer found: NO
00:06:20.922 Program restructuredtext-lint found: NO
00:06:20.922 Program valgrind found: YES (/usr/bin/valgrind)
00:06:20.922 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:20.922 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:20.922 Compiler for C supports arguments -Wwrite-strings: YES
00:06:20.922 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:20.922 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:06:20.922 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:06:20.922 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:20.922 Build targets in project: 8 00:06:20.922 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:06:20.922 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:06:20.922 00:06:20.922 libvfio-user 0.0.1 00:06:20.922 00:06:20.922 User defined options 00:06:20.922 buildtype : debug 00:06:20.922 default_library: shared 00:06:20.922 libdir : /usr/local/lib 00:06:20.922 00:06:20.922 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:21.488 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:21.488 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:06:21.488 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:06:21.488 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:06:21.488 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:06:21.488 [5/37] Compiling C object samples/null.p/null.c.o 00:06:21.488 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:06:21.488 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:06:21.488 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:06:21.488 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:06:21.488 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:06:21.488 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:06:21.488 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:06:21.488 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:06:21.488 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:06:21.488 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:06:21.488 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:06:21.488 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:06:21.488 [18/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:06:21.488 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:06:21.488 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:06:21.488 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:06:21.488 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:06:21.488 [23/37] Compiling C object samples/server.p/server.c.o 00:06:21.488 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:06:21.488 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:06:21.746 [26/37] Compiling C object samples/client.p/client.c.o 00:06:21.746 [27/37] Linking target samples/client 00:06:21.746 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:06:21.746 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:06:21.746 [30/37] Linking target test/unit_tests 00:06:21.746 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:06:22.004 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:06:22.004 [33/37] Linking target samples/server 00:06:22.004 [34/37] Linking target samples/gpio-pci-idio-16 00:06:22.004 [35/37] Linking target samples/lspci 00:06:22.004 [36/37] Linking target samples/null 00:06:22.004 [37/37] Linking target samples/shadow_ioeventfd_server 00:06:22.004 INFO: autodetecting backend as ninja 00:06:22.005 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:06:22.005 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:22.262 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:22.262 ninja: no work to do. 00:06:28.827 The Meson build system 00:06:28.827 Version: 1.3.1 00:06:28.827 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:06:28.827 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:06:28.827 Build type: native build 00:06:28.827 Program cat found: YES (/usr/bin/cat) 00:06:28.827 Project name: DPDK 00:06:28.827 Project version: 24.03.0 00:06:28.827 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:06:28.827 C linker for the host machine: cc ld.bfd 2.39-16 00:06:28.827 Host machine cpu family: x86_64 00:06:28.827 Host machine cpu: x86_64 00:06:28.827 Message: ## Building in Developer Mode ## 00:06:28.827 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:28.827 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:06:28.827 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:28.827 Program python3 found: YES (/usr/bin/python3) 00:06:28.827 Program cat found: YES (/usr/bin/cat) 00:06:28.827 Compiler for C supports arguments -march=native: YES 00:06:28.827 Checking for size of "void *" : 8 00:06:28.827 Checking for size of "void *" : 8 (cached) 00:06:28.827 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:06:28.827 Library m found: YES 00:06:28.827 Library numa found: YES 00:06:28.827 Has header "numaif.h" : YES 00:06:28.827 Library fdt found: NO 00:06:28.827 Library execinfo found: NO 00:06:28.827 Has header "execinfo.h" : YES 00:06:28.827 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:06:28.827 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:28.827 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:28.827 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:28.827 Run-time dependency openssl found: YES 3.0.9 00:06:28.827 Run-time dependency libpcap found: YES 1.10.4 00:06:28.827 Has header "pcap.h" with dependency libpcap: YES 00:06:28.827 Compiler for C supports arguments -Wcast-qual: YES 00:06:28.827 Compiler for C supports arguments -Wdeprecated: YES 00:06:28.827 Compiler for C supports arguments -Wformat: YES 00:06:28.827 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:28.827 Compiler for C supports arguments -Wformat-security: NO 00:06:28.827 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:28.827 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:28.827 Compiler for C supports arguments -Wnested-externs: YES 00:06:28.827 Compiler for C supports arguments -Wold-style-definition: YES 00:06:28.827 Compiler for C supports arguments -Wpointer-arith: YES 00:06:28.827 Compiler for C supports arguments -Wsign-compare: YES 00:06:28.827 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:28.827 Compiler for C supports arguments -Wundef: YES 00:06:28.827 Compiler for C supports arguments -Wwrite-strings: YES 00:06:28.827 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:28.827 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:06:28.827 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:28.827 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:28.827 Program objdump found: YES (/usr/bin/objdump) 00:06:28.827 Compiler for C supports arguments -mavx512f: YES 00:06:28.827 Checking if "AVX512 checking" compiles: YES 00:06:28.827 Fetching value of define "__SSE4_2__" : 1 00:06:28.827 Fetching value of define "__AES__" : 1 00:06:28.827 Fetching value of define "__AVX__" : 1 00:06:28.827 Fetching value of define "__AVX2__" : 1 00:06:28.827 Fetching value of define "__AVX512BW__" : 1 00:06:28.827 Fetching value of define "__AVX512CD__" : 1 00:06:28.827 Fetching value of define "__AVX512DQ__" : 1 00:06:28.827 Fetching value of define "__AVX512F__" : 1 00:06:28.827 Fetching value of define "__AVX512VL__" : 1 00:06:28.827 Fetching value of define "__PCLMUL__" : 1 00:06:28.827 Fetching value of define "__RDRND__" : 1 00:06:28.827 Fetching value of define "__RDSEED__" : 1 00:06:28.827 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:28.827 Fetching value of define "__znver1__" : (undefined) 00:06:28.827 Fetching value of define "__znver2__" : (undefined) 00:06:28.827 Fetching value of define "__znver3__" : (undefined) 00:06:28.827 Fetching value of define "__znver4__" : (undefined) 00:06:28.827 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:28.827 Message: lib/log: Defining dependency "log" 00:06:28.827 Message: lib/kvargs: Defining dependency "kvargs" 00:06:28.827 Message: lib/telemetry: Defining dependency "telemetry" 00:06:28.827 Checking for function "getentropy" : NO 00:06:28.828 Message: lib/eal: Defining dependency "eal" 00:06:28.828 Message: lib/ring: Defining dependency "ring" 00:06:28.828 Message: lib/rcu: Defining dependency "rcu" 00:06:28.828 Message: lib/mempool: Defining dependency "mempool" 00:06:28.828 Message: lib/mbuf: Defining dependency "mbuf" 00:06:28.828 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:28.828 Fetching value of define "__AVX512F__" : 1 (cached) 00:06:28.828 Fetching value of define "__AVX512BW__" : 1 (cached) 00:06:28.828 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:06:28.828 Fetching value of define "__AVX512VL__" : 1 (cached) 00:06:28.828 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:06:28.828 Compiler for C supports arguments -mpclmul: YES 00:06:28.828 Compiler for C supports arguments -maes: YES 00:06:28.828 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:28.828 Compiler for C supports arguments -mavx512bw: YES 00:06:28.828 Compiler for C supports arguments -mavx512dq: YES 00:06:28.828 Compiler for C supports arguments -mavx512vl: YES 00:06:28.828 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:28.828 Compiler for C supports arguments -mavx2: YES 00:06:28.828 Compiler for C supports arguments -mavx: YES 00:06:28.828 Message: lib/net: Defining dependency "net" 00:06:28.828 Message: lib/meter: Defining dependency "meter" 00:06:28.828 Message: lib/ethdev: Defining dependency "ethdev" 00:06:28.828 Message: lib/pci: Defining dependency "pci" 00:06:28.828 Message: lib/cmdline: Defining dependency "cmdline" 00:06:28.828 Message: lib/hash: Defining dependency "hash" 00:06:28.828 Message: lib/timer: Defining dependency "timer" 00:06:28.828 Message: lib/compressdev: Defining dependency "compressdev" 00:06:28.828 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:28.828 Message: lib/dmadev: Defining dependency "dmadev" 00:06:28.828 
Compiler for C supports arguments -Wno-cast-qual: YES 00:06:28.828 Message: lib/power: Defining dependency "power" 00:06:28.828 Message: lib/reorder: Defining dependency "reorder" 00:06:28.828 Message: lib/security: Defining dependency "security" 00:06:28.828 Has header "linux/userfaultfd.h" : YES 00:06:28.828 Has header "linux/vduse.h" : YES 00:06:28.828 Message: lib/vhost: Defining dependency "vhost" 00:06:28.828 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:28.828 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:28.828 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:28.828 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:28.828 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:28.828 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:28.828 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:28.828 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:28.828 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:28.828 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:28.828 Program doxygen found: YES (/usr/bin/doxygen) 00:06:28.828 Configuring doxy-api-html.conf using configuration 00:06:28.828 Configuring doxy-api-man.conf using configuration 00:06:28.828 Program mandb found: YES (/usr/bin/mandb) 00:06:28.828 Program sphinx-build found: NO 00:06:28.828 Configuring rte_build_config.h using configuration 00:06:28.828 Message: 00:06:28.828 ================= 00:06:28.828 Applications Enabled 00:06:28.828 ================= 00:06:28.828 00:06:28.828 apps: 00:06:28.828 00:06:28.828 00:06:28.828 Message: 00:06:28.828 ================= 00:06:28.828 Libraries Enabled 00:06:28.828 ================= 00:06:28.828 00:06:28.828 libs: 00:06:28.828 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:28.828 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:28.828 cryptodev, dmadev, power, reorder, security, vhost, 00:06:28.828 00:06:28.828 Message: 00:06:28.828 =============== 00:06:28.828 Drivers Enabled 00:06:28.828 =============== 00:06:28.828 00:06:28.828 common: 00:06:28.828 00:06:28.828 bus: 00:06:28.828 pci, vdev, 00:06:28.828 mempool: 00:06:28.828 ring, 00:06:28.828 dma: 00:06:28.828 00:06:28.828 net: 00:06:28.828 00:06:28.828 crypto: 00:06:28.828 00:06:28.828 compress: 00:06:28.828 00:06:28.828 vdpa: 00:06:28.828 00:06:28.828 00:06:28.828 Message: 00:06:28.828 ================= 00:06:28.828 Content Skipped 00:06:28.828 ================= 00:06:28.828 00:06:28.828 apps: 00:06:28.828 dumpcap: explicitly disabled via build config 00:06:28.828 graph: explicitly disabled via build config 00:06:28.828 pdump: explicitly disabled via build config 00:06:28.828 proc-info: explicitly disabled via build config 00:06:28.828 test-acl: explicitly disabled via build config 00:06:28.828 test-bbdev: explicitly disabled via build config 00:06:28.828 test-cmdline: explicitly disabled via build config 00:06:28.828 test-compress-perf: explicitly disabled via build config 00:06:28.828 test-crypto-perf: explicitly disabled via build config 00:06:28.828 test-dma-perf: explicitly disabled via build config 00:06:28.828 test-eventdev: explicitly disabled via build config 00:06:28.828 test-fib: explicitly disabled via build config 00:06:28.828 test-flow-perf: explicitly disabled via build config 00:06:28.828 test-gpudev: explicitly disabled via build config 
00:06:28.828 test-mldev: explicitly disabled via build config 00:06:28.828 test-pipeline: explicitly disabled via build config 00:06:28.828 test-pmd: explicitly disabled via build config 00:06:28.828 test-regex: explicitly disabled via build config 00:06:28.828 test-sad: explicitly disabled via build config 00:06:28.828 test-security-perf: explicitly disabled via build config 00:06:28.828 00:06:28.828 libs: 00:06:28.828 argparse: explicitly disabled via build config 00:06:28.828 metrics: explicitly disabled via build config 00:06:28.828 acl: explicitly disabled via build config 00:06:28.828 bbdev: explicitly disabled via build config 00:06:28.828 bitratestats: explicitly disabled via build config 00:06:28.828 bpf: explicitly disabled via build config 00:06:28.828 cfgfile: explicitly disabled via build config 00:06:28.828 distributor: explicitly disabled via build config 00:06:28.828 efd: explicitly disabled via build config 00:06:28.828 eventdev: explicitly disabled via build config 00:06:28.828 dispatcher: explicitly disabled via build config 00:06:28.828 gpudev: explicitly disabled via build config 00:06:28.828 gro: explicitly disabled via build config 00:06:28.828 gso: explicitly disabled via build config 00:06:28.828 ip_frag: explicitly disabled via build config 00:06:28.828 jobstats: explicitly disabled via build config 00:06:28.828 latencystats: explicitly disabled via build config 00:06:28.828 lpm: explicitly disabled via build config 00:06:28.828 member: explicitly disabled via build config 00:06:28.828 pcapng: explicitly disabled via build config 00:06:28.828 rawdev: explicitly disabled via build config 00:06:28.828 regexdev: explicitly disabled via build config 00:06:28.828 mldev: explicitly disabled via build config 00:06:28.828 rib: explicitly disabled via build config 00:06:28.828 sched: explicitly disabled via build config 00:06:28.828 stack: explicitly disabled via build config 00:06:28.828 ipsec: explicitly disabled via build config 00:06:28.828 pdcp: explicitly disabled via build config 00:06:28.828 fib: explicitly disabled via build config 00:06:28.828 port: explicitly disabled via build config 00:06:28.828 pdump: explicitly disabled via build config 00:06:28.828 table: explicitly disabled via build config 00:06:28.828 pipeline: explicitly disabled via build config 00:06:28.828 graph: explicitly disabled via build config 00:06:28.828 node: explicitly disabled via build config 00:06:28.828 00:06:28.828 drivers: 00:06:28.828 common/cpt: not in enabled drivers build config 00:06:28.828 common/dpaax: not in enabled drivers build config 00:06:28.828 common/iavf: not in enabled drivers build config 00:06:28.828 common/idpf: not in enabled drivers build config 00:06:28.828 common/ionic: not in enabled drivers build config 00:06:28.828 common/mvep: not in enabled drivers build config 00:06:28.828 common/octeontx: not in enabled drivers build config 00:06:28.828 bus/auxiliary: not in enabled drivers build config 00:06:28.828 bus/cdx: not in enabled drivers build config 00:06:28.828 bus/dpaa: not in enabled drivers build config 00:06:28.828 bus/fslmc: not in enabled drivers build config 00:06:28.828 bus/ifpga: not in enabled drivers build config 00:06:28.828 bus/platform: not in enabled drivers build config 00:06:28.828 bus/uacce: not in enabled drivers build config 00:06:28.828 bus/vmbus: not in enabled drivers build config 00:06:28.828 common/cnxk: not in enabled drivers build config 00:06:28.828 common/mlx5: not in enabled drivers build config 00:06:28.828 common/nfp: not in 
enabled drivers build config 00:06:28.828 common/nitrox: not in enabled drivers build config 00:06:28.828 common/qat: not in enabled drivers build config 00:06:28.828 common/sfc_efx: not in enabled drivers build config 00:06:28.828 mempool/bucket: not in enabled drivers build config 00:06:28.828 mempool/cnxk: not in enabled drivers build config 00:06:28.828 mempool/dpaa: not in enabled drivers build config 00:06:28.828 mempool/dpaa2: not in enabled drivers build config 00:06:28.828 mempool/octeontx: not in enabled drivers build config 00:06:28.828 mempool/stack: not in enabled drivers build config 00:06:28.828 dma/cnxk: not in enabled drivers build config 00:06:28.828 dma/dpaa: not in enabled drivers build config 00:06:28.828 dma/dpaa2: not in enabled drivers build config 00:06:28.828 dma/hisilicon: not in enabled drivers build config 00:06:28.828 dma/idxd: not in enabled drivers build config 00:06:28.828 dma/ioat: not in enabled drivers build config 00:06:28.828 dma/skeleton: not in enabled drivers build config 00:06:28.828 net/af_packet: not in enabled drivers build config 00:06:28.828 net/af_xdp: not in enabled drivers build config 00:06:28.828 net/ark: not in enabled drivers build config 00:06:28.828 net/atlantic: not in enabled drivers build config 00:06:28.828 net/avp: not in enabled drivers build config 00:06:28.828 net/axgbe: not in enabled drivers build config 00:06:28.828 net/bnx2x: not in enabled drivers build config 00:06:28.828 net/bnxt: not in enabled drivers build config 00:06:28.828 net/bonding: not in enabled drivers build config 00:06:28.828 net/cnxk: not in enabled drivers build config 00:06:28.828 net/cpfl: not in enabled drivers build config 00:06:28.829 net/cxgbe: not in enabled drivers build config 00:06:28.829 net/dpaa: not in enabled drivers build config 00:06:28.829 net/dpaa2: not in enabled drivers build config 00:06:28.829 net/e1000: not in enabled drivers build config 00:06:28.829 net/ena: not in enabled drivers build config 00:06:28.829 net/enetc: not in enabled drivers build config 00:06:28.829 net/enetfec: not in enabled drivers build config 00:06:28.829 net/enic: not in enabled drivers build config 00:06:28.829 net/failsafe: not in enabled drivers build config 00:06:28.829 net/fm10k: not in enabled drivers build config 00:06:28.829 net/gve: not in enabled drivers build config 00:06:28.829 net/hinic: not in enabled drivers build config 00:06:28.829 net/hns3: not in enabled drivers build config 00:06:28.829 net/i40e: not in enabled drivers build config 00:06:28.829 net/iavf: not in enabled drivers build config 00:06:28.829 net/ice: not in enabled drivers build config 00:06:28.829 net/idpf: not in enabled drivers build config 00:06:28.829 net/igc: not in enabled drivers build config 00:06:28.829 net/ionic: not in enabled drivers build config 00:06:28.829 net/ipn3ke: not in enabled drivers build config 00:06:28.829 net/ixgbe: not in enabled drivers build config 00:06:28.829 net/mana: not in enabled drivers build config 00:06:28.829 net/memif: not in enabled drivers build config 00:06:28.829 net/mlx4: not in enabled drivers build config 00:06:28.829 net/mlx5: not in enabled drivers build config 00:06:28.829 net/mvneta: not in enabled drivers build config 00:06:28.829 net/mvpp2: not in enabled drivers build config 00:06:28.829 net/netvsc: not in enabled drivers build config 00:06:28.829 net/nfb: not in enabled drivers build config 00:06:28.829 net/nfp: not in enabled drivers build config 00:06:28.829 net/ngbe: not in enabled drivers build config 00:06:28.829 
net/null: not in enabled drivers build config 00:06:28.829 net/octeontx: not in enabled drivers build config 00:06:28.829 net/octeon_ep: not in enabled drivers build config 00:06:28.829 net/pcap: not in enabled drivers build config 00:06:28.829 net/pfe: not in enabled drivers build config 00:06:28.829 net/qede: not in enabled drivers build config 00:06:28.829 net/ring: not in enabled drivers build config 00:06:28.829 net/sfc: not in enabled drivers build config 00:06:28.829 net/softnic: not in enabled drivers build config 00:06:28.829 net/tap: not in enabled drivers build config 00:06:28.829 net/thunderx: not in enabled drivers build config 00:06:28.829 net/txgbe: not in enabled drivers build config 00:06:28.829 net/vdev_netvsc: not in enabled drivers build config 00:06:28.829 net/vhost: not in enabled drivers build config 00:06:28.829 net/virtio: not in enabled drivers build config 00:06:28.829 net/vmxnet3: not in enabled drivers build config 00:06:28.829 raw/*: missing internal dependency, "rawdev" 00:06:28.829 crypto/armv8: not in enabled drivers build config 00:06:28.829 crypto/bcmfs: not in enabled drivers build config 00:06:28.829 crypto/caam_jr: not in enabled drivers build config 00:06:28.829 crypto/ccp: not in enabled drivers build config 00:06:28.829 crypto/cnxk: not in enabled drivers build config 00:06:28.829 crypto/dpaa_sec: not in enabled drivers build config 00:06:28.829 crypto/dpaa2_sec: not in enabled drivers build config 00:06:28.829 crypto/ipsec_mb: not in enabled drivers build config 00:06:28.829 crypto/mlx5: not in enabled drivers build config 00:06:28.829 crypto/mvsam: not in enabled drivers build config 00:06:28.829 crypto/nitrox: not in enabled drivers build config 00:06:28.829 crypto/null: not in enabled drivers build config 00:06:28.829 crypto/octeontx: not in enabled drivers build config 00:06:28.829 crypto/openssl: not in enabled drivers build config 00:06:28.829 crypto/scheduler: not in enabled drivers build config 00:06:28.829 crypto/uadk: not in enabled drivers build config 00:06:28.829 crypto/virtio: not in enabled drivers build config 00:06:28.829 compress/isal: not in enabled drivers build config 00:06:28.829 compress/mlx5: not in enabled drivers build config 00:06:28.829 compress/nitrox: not in enabled drivers build config 00:06:28.829 compress/octeontx: not in enabled drivers build config 00:06:28.829 compress/zlib: not in enabled drivers build config 00:06:28.829 regex/*: missing internal dependency, "regexdev" 00:06:28.829 ml/*: missing internal dependency, "mldev" 00:06:28.829 vdpa/ifc: not in enabled drivers build config 00:06:28.829 vdpa/mlx5: not in enabled drivers build config 00:06:28.829 vdpa/nfp: not in enabled drivers build config 00:06:28.829 vdpa/sfc: not in enabled drivers build config 00:06:28.829 event/*: missing internal dependency, "eventdev" 00:06:28.829 baseband/*: missing internal dependency, "bbdev" 00:06:28.829 gpu/*: missing internal dependency, "gpudev" 00:06:28.829 00:06:28.829 00:06:28.829 Build targets in project: 85 00:06:28.829 00:06:28.829 DPDK 24.03.0 00:06:28.829 00:06:28.829 User defined options 00:06:28.829 buildtype : debug 00:06:28.829 default_library : shared 00:06:28.829 libdir : lib 00:06:28.829 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:28.829 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:28.829 c_link_args : 00:06:28.829 cpu_instruction_set: native 00:06:28.829 disable_apps : 
test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:06:28.829 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:06:28.829 enable_docs : false 00:06:28.829 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:06:28.829 enable_kmods : false 00:06:28.829 tests : false 00:06:28.829 00:06:28.829 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:29.099 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:06:29.099 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:29.099 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:29.099 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:29.363 [4/268] Linking static target lib/librte_kvargs.a 00:06:29.363 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:29.363 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:29.363 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:29.363 [8/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:29.363 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:29.363 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:29.363 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:29.363 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:29.363 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:29.363 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:29.363 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:29.363 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:29.363 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:29.363 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:29.363 [19/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:29.363 [20/268] Linking static target lib/librte_log.a 00:06:29.363 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:29.363 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:29.363 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:29.363 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:29.363 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:29.363 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:29.363 [27/268] Linking static target lib/librte_pci.a 00:06:29.363 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:29.624 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:29.624 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:29.624 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:29.624 [32/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:29.624 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:29.624 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:29.624 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:29.884 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:29.884 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:29.884 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:29.884 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:29.884 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:29.884 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:29.884 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:29.884 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:29.884 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:29.884 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:29.884 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:29.884 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:29.884 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:29.884 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:29.884 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:29.884 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:29.884 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:29.884 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:29.884 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:29.884 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:29.884 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:29.884 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:29.884 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:29.884 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:29.884 [60/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:29.884 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:29.884 [62/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:29.884 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:29.884 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:29.884 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:29.884 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:29.884 [67/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:29.884 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:29.884 [69/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:29.884 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:29.884 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:29.884 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:29.884 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:29.884 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:29.884 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:29.884 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:29.884 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:29.884 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:29.884 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:29.884 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:29.884 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:29.884 [82/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:29.884 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:29.884 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:29.884 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:29.884 [86/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:29.884 [87/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:29.884 [88/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:29.884 [89/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:29.884 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:29.884 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:29.884 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:29.884 [93/268] Linking static target lib/librte_meter.a 00:06:29.884 [94/268] Linking static target lib/librte_ring.a 00:06:29.884 [95/268] Linking static target lib/librte_telemetry.a 00:06:29.884 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:29.884 [97/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:29.884 [98/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:29.884 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:29.884 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:29.884 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:29.884 [102/268] Linking static target lib/librte_mempool.a 00:06:29.884 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:29.884 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:29.884 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:29.884 [106/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:29.884 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:29.884 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:29.884 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:29.884 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:29.884 [111/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:29.884 [112/268] Compiling C object 
lib/librte_net.a.p/net_rte_arp.c.o 00:06:30.143 [113/268] Linking static target lib/librte_net.a 00:06:30.143 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:30.143 [115/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:30.143 [116/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:30.143 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:30.143 [118/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:30.143 [119/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:30.143 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:30.143 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:30.143 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:30.143 [123/268] Linking static target lib/librte_cmdline.a 00:06:30.143 [124/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:30.143 [125/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:30.143 [126/268] Linking static target lib/librte_timer.a 00:06:30.143 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:30.143 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:30.143 [129/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:30.143 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:30.143 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:30.143 [132/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:30.143 [133/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:30.143 [134/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:30.143 [135/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:30.143 [136/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:30.143 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:30.143 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:30.143 [139/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:30.143 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:30.143 [141/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:30.143 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:30.143 [143/268] Linking static target lib/librte_rcu.a 00:06:30.143 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:30.143 [145/268] Linking static target lib/librte_eal.a 00:06:30.143 [146/268] Linking static target lib/librte_compressdev.a 00:06:30.143 [147/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:30.143 [148/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:30.143 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:30.143 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:30.143 [151/268] Linking static target lib/librte_dmadev.a 00:06:30.143 [152/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:30.143 [153/268] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:06:30.143 [154/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.143 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:30.143 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:30.143 [157/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:30.143 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:30.400 [159/268] Linking target lib/librte_log.so.24.1 00:06:30.400 [160/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:30.400 [161/268] Linking static target lib/librte_mbuf.a 00:06:30.400 [162/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.400 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:30.400 [164/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:30.400 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:30.400 [166/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:30.401 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:30.401 [168/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.401 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:30.401 [170/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:30.401 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:30.401 [172/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:30.401 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:30.401 [174/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:30.401 [175/268] Linking static target lib/librte_power.a 00:06:30.401 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:30.401 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:30.401 [178/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:30.401 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:30.401 [180/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:30.401 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:30.401 [182/268] Linking static target lib/librte_hash.a 00:06:30.401 [183/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:30.401 [184/268] Linking static target lib/librte_security.a 00:06:30.401 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:30.401 [186/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:30.401 [187/268] Linking static target lib/librte_reorder.a 00:06:30.401 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:30.401 [189/268] Linking target lib/librte_kvargs.so.24.1 00:06:30.401 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:30.660 [191/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.660 [192/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:30.660 [193/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.660 [194/268] 
Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:30.660 [195/268] Linking static target lib/librte_cryptodev.a 00:06:30.660 [196/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:30.660 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:30.660 [198/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:30.660 [199/268] Linking static target drivers/librte_bus_vdev.a 00:06:30.660 [200/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.660 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:30.660 [202/268] Linking target lib/librte_telemetry.so.24.1 00:06:30.660 [203/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:30.660 [204/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:30.660 [205/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:30.660 [206/268] Linking static target drivers/librte_mempool_ring.a 00:06:30.660 [207/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:30.660 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:30.660 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:30.660 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:30.660 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:30.660 [212/268] Linking static target drivers/librte_bus_pci.a 00:06:30.660 [213/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:30.918 [214/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.918 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.918 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.918 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.918 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.918 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:31.200 [220/268] Linking static target lib/librte_ethdev.a 00:06:31.200 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.200 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.200 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:31.460 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.460 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.460 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.719 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.656 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:32.656 [229/268] Linking static target lib/librte_vhost.a 00:06:32.914 [230/268] Generating lib/cryptodev.sym_chk with 
a custom command (wrapped by meson to capture output) 00:06:34.820 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:41.392 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.297 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.297 [234/268] Linking target lib/librte_eal.so.24.1 00:06:43.609 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:43.609 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:43.609 [237/268] Linking target lib/librte_ring.so.24.1 00:06:43.609 [238/268] Linking target lib/librte_dmadev.so.24.1 00:06:43.609 [239/268] Linking target lib/librte_timer.so.24.1 00:06:43.609 [240/268] Linking target lib/librte_pci.so.24.1 00:06:43.609 [241/268] Linking target lib/librte_meter.so.24.1 00:06:43.609 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:43.609 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:43.609 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:43.609 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:43.609 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:43.883 [247/268] Linking target lib/librte_rcu.so.24.1 00:06:43.883 [248/268] Linking target lib/librte_mempool.so.24.1 00:06:43.883 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:43.883 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:43.883 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:43.883 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:43.883 [253/268] Linking target lib/librte_mbuf.so.24.1 00:06:44.141 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:44.141 [255/268] Linking target lib/librte_reorder.so.24.1 00:06:44.141 [256/268] Linking target lib/librte_net.so.24.1 00:06:44.141 [257/268] Linking target lib/librte_compressdev.so.24.1 00:06:44.141 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:06:44.400 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:44.400 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:44.400 [261/268] Linking target lib/librte_cmdline.so.24.1 00:06:44.400 [262/268] Linking target lib/librte_hash.so.24.1 00:06:44.400 [263/268] Linking target lib/librte_ethdev.so.24.1 00:06:44.400 [264/268] Linking target lib/librte_security.so.24.1 00:06:44.400 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:44.400 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:44.659 [267/268] Linking target lib/librte_power.so.24.1 00:06:44.659 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:44.659 INFO: autodetecting backend as ninja 00:06:44.659 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:06:45.596 CC lib/ut/ut.o 00:06:45.596 CC lib/log/log.o 00:06:45.596 CC lib/log/log_flags.o 00:06:45.596 CC lib/log/log_deprecated.o 00:06:45.596 CC lib/ut_mock/mock.o 00:06:45.856 LIB libspdk_ut.a 00:06:45.856 SO 
libspdk_ut.so.2.0 00:06:45.856 LIB libspdk_log.a 00:06:45.856 LIB libspdk_ut_mock.a 00:06:45.856 SO libspdk_log.so.7.0 00:06:45.856 SO libspdk_ut_mock.so.6.0 00:06:45.856 SYMLINK libspdk_ut.so 00:06:46.115 SYMLINK libspdk_ut_mock.so 00:06:46.115 SYMLINK libspdk_log.so 00:06:46.374 CC lib/dma/dma.o 00:06:46.374 CC lib/util/base64.o 00:06:46.374 CC lib/util/bit_array.o 00:06:46.374 CC lib/util/crc32.o 00:06:46.374 CC lib/util/cpuset.o 00:06:46.374 CC lib/util/crc16.o 00:06:46.374 CXX lib/trace_parser/trace.o 00:06:46.374 CC lib/util/crc32c.o 00:06:46.374 CC lib/util/crc32_ieee.o 00:06:46.374 CC lib/util/crc64.o 00:06:46.374 CC lib/util/dif.o 00:06:46.374 CC lib/util/fd.o 00:06:46.374 CC lib/util/file.o 00:06:46.374 CC lib/util/hexlify.o 00:06:46.374 CC lib/ioat/ioat.o 00:06:46.374 CC lib/util/iov.o 00:06:46.374 CC lib/util/math.o 00:06:46.374 CC lib/util/pipe.o 00:06:46.374 CC lib/util/strerror_tls.o 00:06:46.374 CC lib/util/string.o 00:06:46.374 CC lib/util/uuid.o 00:06:46.374 CC lib/util/fd_group.o 00:06:46.374 CC lib/util/xor.o 00:06:46.374 CC lib/util/zipf.o 00:06:46.634 CC lib/vfio_user/host/vfio_user.o 00:06:46.634 CC lib/vfio_user/host/vfio_user_pci.o 00:06:46.634 LIB libspdk_dma.a 00:06:46.634 SO libspdk_dma.so.4.0 00:06:46.634 LIB libspdk_ioat.a 00:06:46.634 SYMLINK libspdk_dma.so 00:06:46.634 SO libspdk_ioat.so.7.0 00:06:46.892 SYMLINK libspdk_ioat.so 00:06:46.892 LIB libspdk_vfio_user.a 00:06:46.892 SO libspdk_vfio_user.so.5.0 00:06:46.892 LIB libspdk_util.a 00:06:46.893 SYMLINK libspdk_vfio_user.so 00:06:46.893 SO libspdk_util.so.9.0 00:06:47.151 SYMLINK libspdk_util.so 00:06:47.151 LIB libspdk_trace_parser.a 00:06:47.410 SO libspdk_trace_parser.so.5.0 00:06:47.410 SYMLINK libspdk_trace_parser.so 00:06:47.668 CC lib/json/json_parse.o 00:06:47.668 CC lib/json/json_util.o 00:06:47.668 CC lib/json/json_write.o 00:06:47.668 CC lib/vmd/vmd.o 00:06:47.668 CC lib/vmd/led.o 00:06:47.668 CC lib/conf/conf.o 00:06:47.668 CC lib/rdma/common.o 00:06:47.668 CC lib/rdma/rdma_verbs.o 00:06:47.668 CC lib/idxd/idxd.o 00:06:47.668 CC lib/env_dpdk/env.o 00:06:47.668 CC lib/idxd/idxd_user.o 00:06:47.668 CC lib/env_dpdk/memory.o 00:06:47.668 CC lib/idxd/idxd_kernel.o 00:06:47.668 CC lib/env_dpdk/pci.o 00:06:47.668 CC lib/env_dpdk/init.o 00:06:47.668 CC lib/env_dpdk/threads.o 00:06:47.668 CC lib/env_dpdk/pci_ioat.o 00:06:47.668 CC lib/env_dpdk/pci_virtio.o 00:06:47.668 CC lib/env_dpdk/pci_vmd.o 00:06:47.668 CC lib/env_dpdk/pci_idxd.o 00:06:47.668 CC lib/env_dpdk/pci_event.o 00:06:47.668 CC lib/env_dpdk/sigbus_handler.o 00:06:47.668 CC lib/env_dpdk/pci_dpdk.o 00:06:47.668 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:47.668 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:47.927 LIB libspdk_conf.a 00:06:47.927 SO libspdk_conf.so.6.0 00:06:47.927 LIB libspdk_json.a 00:06:47.927 LIB libspdk_rdma.a 00:06:47.927 SO libspdk_json.so.6.0 00:06:47.927 SO libspdk_rdma.so.6.0 00:06:47.927 SYMLINK libspdk_conf.so 00:06:47.927 SYMLINK libspdk_json.so 00:06:47.927 SYMLINK libspdk_rdma.so 00:06:48.185 LIB libspdk_idxd.a 00:06:48.185 SO libspdk_idxd.so.12.0 00:06:48.185 LIB libspdk_vmd.a 00:06:48.185 SO libspdk_vmd.so.6.0 00:06:48.185 SYMLINK libspdk_idxd.so 00:06:48.185 SYMLINK libspdk_vmd.so 00:06:48.444 CC lib/jsonrpc/jsonrpc_server.o 00:06:48.444 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:48.444 CC lib/jsonrpc/jsonrpc_client.o 00:06:48.444 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:48.703 LIB libspdk_jsonrpc.a 00:06:48.703 SO libspdk_jsonrpc.so.6.0 00:06:48.703 SYMLINK libspdk_jsonrpc.so 00:06:48.961 LIB libspdk_env_dpdk.a 
00:06:48.961 SO libspdk_env_dpdk.so.14.0 00:06:49.219 SYMLINK libspdk_env_dpdk.so 00:06:49.219 CC lib/rpc/rpc.o 00:06:49.477 LIB libspdk_rpc.a 00:06:49.477 SO libspdk_rpc.so.6.0 00:06:49.477 SYMLINK libspdk_rpc.so 00:06:49.735 CC lib/keyring/keyring.o 00:06:49.735 CC lib/keyring/keyring_rpc.o 00:06:49.735 CC lib/trace/trace.o 00:06:49.735 CC lib/notify/notify.o 00:06:49.735 CC lib/trace/trace_flags.o 00:06:49.735 CC lib/notify/notify_rpc.o 00:06:49.735 CC lib/trace/trace_rpc.o 00:06:49.993 LIB libspdk_notify.a 00:06:49.993 LIB libspdk_keyring.a 00:06:49.993 SO libspdk_notify.so.6.0 00:06:49.993 LIB libspdk_trace.a 00:06:49.993 SO libspdk_keyring.so.1.0 00:06:50.252 SYMLINK libspdk_notify.so 00:06:50.252 SO libspdk_trace.so.10.0 00:06:50.252 SYMLINK libspdk_keyring.so 00:06:50.252 SYMLINK libspdk_trace.so 00:06:50.512 CC lib/thread/thread.o 00:06:50.512 CC lib/thread/iobuf.o 00:06:50.512 CC lib/sock/sock.o 00:06:50.512 CC lib/sock/sock_rpc.o 00:06:51.080 LIB libspdk_sock.a 00:06:51.080 SO libspdk_sock.so.10.0 00:06:51.080 SYMLINK libspdk_sock.so 00:06:51.340 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:51.340 CC lib/nvme/nvme_ctrlr.o 00:06:51.340 CC lib/nvme/nvme_fabric.o 00:06:51.340 CC lib/nvme/nvme_ns_cmd.o 00:06:51.340 CC lib/nvme/nvme_ns.o 00:06:51.340 CC lib/nvme/nvme_pcie.o 00:06:51.340 CC lib/nvme/nvme_pcie_common.o 00:06:51.340 CC lib/nvme/nvme_qpair.o 00:06:51.340 CC lib/nvme/nvme.o 00:06:51.340 CC lib/nvme/nvme_quirks.o 00:06:51.340 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:51.340 CC lib/nvme/nvme_transport.o 00:06:51.340 CC lib/nvme/nvme_discovery.o 00:06:51.340 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:51.340 CC lib/nvme/nvme_tcp.o 00:06:51.340 CC lib/nvme/nvme_opal.o 00:06:51.340 CC lib/nvme/nvme_io_msg.o 00:06:51.340 CC lib/nvme/nvme_poll_group.o 00:06:51.340 CC lib/nvme/nvme_zns.o 00:06:51.340 CC lib/nvme/nvme_stubs.o 00:06:51.340 CC lib/nvme/nvme_auth.o 00:06:51.340 CC lib/nvme/nvme_cuse.o 00:06:51.599 CC lib/nvme/nvme_vfio_user.o 00:06:51.599 CC lib/nvme/nvme_rdma.o 00:06:51.858 LIB libspdk_thread.a 00:06:52.116 SO libspdk_thread.so.10.0 00:06:52.116 SYMLINK libspdk_thread.so 00:06:52.375 CC lib/blob/blobstore.o 00:06:52.375 CC lib/blob/zeroes.o 00:06:52.375 CC lib/blob/request.o 00:06:52.375 CC lib/blob/blob_bs_dev.o 00:06:52.375 CC lib/accel/accel.o 00:06:52.376 CC lib/accel/accel_rpc.o 00:06:52.376 CC lib/accel/accel_sw.o 00:06:52.376 CC lib/init/json_config.o 00:06:52.376 CC lib/init/subsystem.o 00:06:52.376 CC lib/init/subsystem_rpc.o 00:06:52.376 CC lib/init/rpc.o 00:06:52.376 CC lib/virtio/virtio.o 00:06:52.376 CC lib/virtio/virtio_vhost_user.o 00:06:52.376 CC lib/virtio/virtio_vfio_user.o 00:06:52.376 CC lib/vfu_tgt/tgt_endpoint.o 00:06:52.376 CC lib/virtio/virtio_pci.o 00:06:52.376 CC lib/vfu_tgt/tgt_rpc.o 00:06:52.635 LIB libspdk_init.a 00:06:52.894 SO libspdk_init.so.5.0 00:06:52.894 LIB libspdk_virtio.a 00:06:52.894 LIB libspdk_vfu_tgt.a 00:06:52.894 SO libspdk_virtio.so.7.0 00:06:52.894 SO libspdk_vfu_tgt.so.3.0 00:06:52.894 SYMLINK libspdk_init.so 00:06:52.894 SYMLINK libspdk_vfu_tgt.so 00:06:52.894 SYMLINK libspdk_virtio.so 00:06:53.152 CC lib/event/app.o 00:06:53.152 CC lib/event/reactor.o 00:06:53.152 CC lib/event/log_rpc.o 00:06:53.152 CC lib/event/app_rpc.o 00:06:53.153 CC lib/event/scheduler_static.o 00:06:53.411 LIB libspdk_accel.a 00:06:53.411 SO libspdk_accel.so.15.0 00:06:53.411 LIB libspdk_nvme.a 00:06:53.411 SYMLINK libspdk_accel.so 00:06:53.669 SO libspdk_nvme.so.13.0 00:06:53.670 LIB libspdk_event.a 00:06:53.670 SO libspdk_event.so.13.1 00:06:53.670 SYMLINK 
libspdk_event.so 00:06:53.928 CC lib/bdev/bdev.o 00:06:53.928 CC lib/bdev/bdev_rpc.o 00:06:53.928 CC lib/bdev/bdev_zone.o 00:06:53.928 CC lib/bdev/part.o 00:06:53.928 CC lib/bdev/scsi_nvme.o 00:06:53.928 SYMLINK libspdk_nvme.so 00:06:55.307 LIB libspdk_blob.a 00:06:55.307 SO libspdk_blob.so.11.0 00:06:55.307 SYMLINK libspdk_blob.so 00:06:55.875 CC lib/lvol/lvol.o 00:06:55.875 CC lib/blobfs/blobfs.o 00:06:55.875 CC lib/blobfs/tree.o 00:06:56.443 LIB libspdk_bdev.a 00:06:56.443 SO libspdk_bdev.so.15.0 00:06:56.443 LIB libspdk_lvol.a 00:06:56.443 SO libspdk_lvol.so.10.0 00:06:56.443 SYMLINK libspdk_bdev.so 00:06:56.443 LIB libspdk_blobfs.a 00:06:56.443 SYMLINK libspdk_lvol.so 00:06:56.703 SO libspdk_blobfs.so.10.0 00:06:56.703 SYMLINK libspdk_blobfs.so 00:06:56.964 CC lib/nvmf/ctrlr.o 00:06:56.964 CC lib/nvmf/ctrlr_discovery.o 00:06:56.964 CC lib/nvmf/ctrlr_bdev.o 00:06:56.964 CC lib/nvmf/nvmf.o 00:06:56.964 CC lib/nvmf/subsystem.o 00:06:56.965 CC lib/nvmf/nvmf_rpc.o 00:06:56.965 CC lib/nvmf/tcp.o 00:06:56.965 CC lib/nvmf/stubs.o 00:06:56.965 CC lib/nvmf/transport.o 00:06:56.965 CC lib/nvmf/vfio_user.o 00:06:56.965 CC lib/scsi/dev.o 00:06:56.965 CC lib/nvmf/mdns_server.o 00:06:56.965 CC lib/nvmf/auth.o 00:06:56.965 CC lib/scsi/lun.o 00:06:56.965 CC lib/scsi/port.o 00:06:56.965 CC lib/ublk/ublk.o 00:06:56.965 CC lib/nvmf/rdma.o 00:06:56.965 CC lib/ublk/ublk_rpc.o 00:06:56.965 CC lib/scsi/scsi.o 00:06:56.965 CC lib/nbd/nbd.o 00:06:56.965 CC lib/scsi/scsi_bdev.o 00:06:56.965 CC lib/nbd/nbd_rpc.o 00:06:56.965 CC lib/scsi/scsi_pr.o 00:06:56.965 CC lib/ftl/ftl_core.o 00:06:56.965 CC lib/ftl/ftl_init.o 00:06:56.965 CC lib/scsi/scsi_rpc.o 00:06:56.965 CC lib/scsi/task.o 00:06:56.965 CC lib/ftl/ftl_layout.o 00:06:56.965 CC lib/ftl/ftl_debug.o 00:06:56.965 CC lib/ftl/ftl_io.o 00:06:56.965 CC lib/ftl/ftl_sb.o 00:06:56.965 CC lib/ftl/ftl_l2p.o 00:06:56.965 CC lib/ftl/ftl_l2p_flat.o 00:06:56.965 CC lib/ftl/ftl_nv_cache.o 00:06:56.965 CC lib/ftl/ftl_band.o 00:06:56.965 CC lib/ftl/ftl_band_ops.o 00:06:56.965 CC lib/ftl/ftl_writer.o 00:06:56.965 CC lib/ftl/ftl_rq.o 00:06:56.965 CC lib/ftl/ftl_reloc.o 00:06:56.965 CC lib/ftl/ftl_l2p_cache.o 00:06:56.965 CC lib/ftl/ftl_p2l.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:56.965 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:56.965 CC lib/ftl/utils/ftl_conf.o 00:06:56.965 CC lib/ftl/utils/ftl_md.o 00:06:56.965 CC lib/ftl/utils/ftl_bitmap.o 00:06:56.965 CC lib/ftl/utils/ftl_property.o 00:06:56.965 CC lib/ftl/utils/ftl_mempool.o 00:06:56.965 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:56.965 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:56.965 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:56.965 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:56.965 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:56.965 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:56.965 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:56.965 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:56.965 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:56.965 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:56.965 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:56.965 CC 
lib/ftl/base/ftl_base_dev.o 00:06:56.965 CC lib/ftl/base/ftl_base_bdev.o 00:06:56.965 CC lib/ftl/ftl_trace.o 00:06:57.534 LIB libspdk_nbd.a 00:06:57.534 SO libspdk_nbd.so.7.0 00:06:57.534 LIB libspdk_scsi.a 00:06:57.534 SYMLINK libspdk_nbd.so 00:06:57.534 SO libspdk_scsi.so.9.0 00:06:57.534 LIB libspdk_ublk.a 00:06:57.803 SO libspdk_ublk.so.3.0 00:06:57.803 SYMLINK libspdk_scsi.so 00:06:57.803 SYMLINK libspdk_ublk.so 00:06:58.062 LIB libspdk_ftl.a 00:06:58.062 CC lib/vhost/vhost.o 00:06:58.062 CC lib/vhost/vhost_rpc.o 00:06:58.062 CC lib/iscsi/conn.o 00:06:58.062 CC lib/vhost/vhost_scsi.o 00:06:58.062 CC lib/vhost/vhost_blk.o 00:06:58.062 CC lib/iscsi/init_grp.o 00:06:58.062 CC lib/iscsi/iscsi.o 00:06:58.062 CC lib/vhost/rte_vhost_user.o 00:06:58.062 CC lib/iscsi/md5.o 00:06:58.062 CC lib/iscsi/param.o 00:06:58.062 CC lib/iscsi/portal_grp.o 00:06:58.062 CC lib/iscsi/tgt_node.o 00:06:58.062 CC lib/iscsi/iscsi_rpc.o 00:06:58.062 CC lib/iscsi/iscsi_subsystem.o 00:06:58.062 CC lib/iscsi/task.o 00:06:58.320 SO libspdk_ftl.so.9.0 00:06:58.578 SYMLINK libspdk_ftl.so 00:06:58.838 LIB libspdk_nvmf.a 00:06:59.098 SO libspdk_nvmf.so.19.0 00:06:59.098 LIB libspdk_vhost.a 00:06:59.098 SO libspdk_vhost.so.8.0 00:06:59.358 SYMLINK libspdk_nvmf.so 00:06:59.358 SYMLINK libspdk_vhost.so 00:06:59.358 LIB libspdk_iscsi.a 00:06:59.618 SO libspdk_iscsi.so.8.0 00:06:59.618 SYMLINK libspdk_iscsi.so 00:07:00.187 CC module/vfu_device/vfu_virtio.o 00:07:00.187 CC module/vfu_device/vfu_virtio_scsi.o 00:07:00.187 CC module/vfu_device/vfu_virtio_blk.o 00:07:00.187 CC module/vfu_device/vfu_virtio_rpc.o 00:07:00.187 CC module/env_dpdk/env_dpdk_rpc.o 00:07:00.446 CC module/scheduler/gscheduler/gscheduler.o 00:07:00.446 CC module/blob/bdev/blob_bdev.o 00:07:00.446 CC module/accel/iaa/accel_iaa.o 00:07:00.446 CC module/keyring/file/keyring.o 00:07:00.446 CC module/keyring/file/keyring_rpc.o 00:07:00.446 CC module/accel/iaa/accel_iaa_rpc.o 00:07:00.446 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:00.446 CC module/accel/error/accel_error.o 00:07:00.446 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:00.446 CC module/accel/error/accel_error_rpc.o 00:07:00.446 LIB libspdk_env_dpdk_rpc.a 00:07:00.446 CC module/keyring/linux/keyring.o 00:07:00.446 CC module/keyring/linux/keyring_rpc.o 00:07:00.446 CC module/accel/ioat/accel_ioat.o 00:07:00.446 CC module/accel/dsa/accel_dsa.o 00:07:00.446 CC module/accel/dsa/accel_dsa_rpc.o 00:07:00.446 CC module/accel/ioat/accel_ioat_rpc.o 00:07:00.446 CC module/sock/posix/posix.o 00:07:00.446 SO libspdk_env_dpdk_rpc.so.6.0 00:07:00.446 SYMLINK libspdk_env_dpdk_rpc.so 00:07:00.446 LIB libspdk_scheduler_gscheduler.a 00:07:00.446 LIB libspdk_accel_iaa.a 00:07:00.446 LIB libspdk_keyring_file.a 00:07:00.446 LIB libspdk_scheduler_dpdk_governor.a 00:07:00.446 LIB libspdk_keyring_linux.a 00:07:00.704 LIB libspdk_accel_error.a 00:07:00.704 SO libspdk_scheduler_gscheduler.so.4.0 00:07:00.704 SO libspdk_accel_iaa.so.3.0 00:07:00.704 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:00.704 SO libspdk_keyring_file.so.1.0 00:07:00.704 LIB libspdk_scheduler_dynamic.a 00:07:00.704 SO libspdk_keyring_linux.so.1.0 00:07:00.705 LIB libspdk_accel_ioat.a 00:07:00.705 SO libspdk_accel_error.so.2.0 00:07:00.705 SYMLINK libspdk_scheduler_gscheduler.so 00:07:00.705 SO libspdk_scheduler_dynamic.so.4.0 00:07:00.705 LIB libspdk_blob_bdev.a 00:07:00.705 SO libspdk_accel_ioat.so.6.0 00:07:00.705 SYMLINK libspdk_accel_iaa.so 00:07:00.705 SYMLINK libspdk_keyring_linux.so 00:07:00.705 LIB libspdk_accel_dsa.a 
00:07:00.705 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:00.705 SYMLINK libspdk_keyring_file.so 00:07:00.705 SO libspdk_blob_bdev.so.11.0 00:07:00.705 SYMLINK libspdk_accel_error.so 00:07:00.705 SYMLINK libspdk_scheduler_dynamic.so 00:07:00.705 SO libspdk_accel_dsa.so.5.0 00:07:00.705 SYMLINK libspdk_accel_ioat.so 00:07:00.705 SYMLINK libspdk_blob_bdev.so 00:07:00.705 SYMLINK libspdk_accel_dsa.so 00:07:00.705 LIB libspdk_vfu_device.a 00:07:00.964 SO libspdk_vfu_device.so.3.0 00:07:00.964 SYMLINK libspdk_vfu_device.so 00:07:01.223 LIB libspdk_sock_posix.a 00:07:01.223 SO libspdk_sock_posix.so.6.0 00:07:01.223 SYMLINK libspdk_sock_posix.so 00:07:01.223 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:01.223 CC module/blobfs/bdev/blobfs_bdev.o 00:07:01.223 CC module/bdev/lvol/vbdev_lvol.o 00:07:01.223 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:01.223 CC module/bdev/gpt/vbdev_gpt.o 00:07:01.223 CC module/bdev/gpt/gpt.o 00:07:01.223 CC module/bdev/error/vbdev_error.o 00:07:01.223 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:01.223 CC module/bdev/nvme/bdev_nvme.o 00:07:01.223 CC module/bdev/passthru/vbdev_passthru.o 00:07:01.223 CC module/bdev/error/vbdev_error_rpc.o 00:07:01.223 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:01.223 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:01.223 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:01.223 CC module/bdev/nvme/nvme_rpc.o 00:07:01.223 CC module/bdev/null/bdev_null.o 00:07:01.223 CC module/bdev/delay/vbdev_delay.o 00:07:01.223 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:01.223 CC module/bdev/split/vbdev_split.o 00:07:01.223 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:01.223 CC module/bdev/nvme/bdev_mdns_client.o 00:07:01.223 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:01.223 CC module/bdev/null/bdev_null_rpc.o 00:07:01.223 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:01.223 CC module/bdev/split/vbdev_split_rpc.o 00:07:01.223 CC module/bdev/nvme/vbdev_opal.o 00:07:01.223 CC module/bdev/ftl/bdev_ftl.o 00:07:01.223 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:01.223 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:01.223 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:01.223 CC module/bdev/raid/bdev_raid_rpc.o 00:07:01.223 CC module/bdev/raid/bdev_raid.o 00:07:01.223 CC module/bdev/raid/bdev_raid_sb.o 00:07:01.223 CC module/bdev/iscsi/bdev_iscsi.o 00:07:01.223 CC module/bdev/raid/raid1.o 00:07:01.223 CC module/bdev/raid/raid0.o 00:07:01.223 CC module/bdev/aio/bdev_aio_rpc.o 00:07:01.223 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:01.223 CC module/bdev/aio/bdev_aio.o 00:07:01.223 CC module/bdev/malloc/bdev_malloc.o 00:07:01.223 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:01.223 CC module/bdev/raid/concat.o 00:07:01.481 LIB libspdk_blobfs_bdev.a 00:07:01.481 SO libspdk_blobfs_bdev.so.6.0 00:07:01.741 LIB libspdk_bdev_gpt.a 00:07:01.741 SYMLINK libspdk_blobfs_bdev.so 00:07:01.741 LIB libspdk_bdev_error.a 00:07:01.741 LIB libspdk_bdev_split.a 00:07:01.741 LIB libspdk_bdev_null.a 00:07:01.741 SO libspdk_bdev_gpt.so.6.0 00:07:01.741 SO libspdk_bdev_error.so.6.0 00:07:01.741 LIB libspdk_bdev_ftl.a 00:07:01.741 SO libspdk_bdev_split.so.6.0 00:07:01.741 LIB libspdk_bdev_passthru.a 00:07:01.741 SO libspdk_bdev_null.so.6.0 00:07:01.741 LIB libspdk_bdev_aio.a 00:07:01.741 LIB libspdk_bdev_zone_block.a 00:07:01.741 SO libspdk_bdev_ftl.so.6.0 00:07:01.741 SYMLINK libspdk_bdev_gpt.so 00:07:01.741 SO libspdk_bdev_passthru.so.6.0 00:07:01.741 LIB libspdk_bdev_iscsi.a 00:07:01.741 LIB libspdk_bdev_delay.a 00:07:01.741 SYMLINK libspdk_bdev_error.so 
00:07:01.741 LIB libspdk_bdev_malloc.a 00:07:01.741 SYMLINK libspdk_bdev_split.so 00:07:01.741 SO libspdk_bdev_aio.so.6.0 00:07:01.741 SO libspdk_bdev_zone_block.so.6.0 00:07:01.741 LIB libspdk_bdev_virtio.a 00:07:01.741 SYMLINK libspdk_bdev_null.so 00:07:01.741 SO libspdk_bdev_malloc.so.6.0 00:07:01.741 SO libspdk_bdev_iscsi.so.6.0 00:07:01.741 SO libspdk_bdev_delay.so.6.0 00:07:01.741 SYMLINK libspdk_bdev_ftl.so 00:07:01.741 SYMLINK libspdk_bdev_passthru.so 00:07:01.741 LIB libspdk_bdev_lvol.a 00:07:01.741 SO libspdk_bdev_virtio.so.6.0 00:07:01.741 SYMLINK libspdk_bdev_aio.so 00:07:02.000 SYMLINK libspdk_bdev_zone_block.so 00:07:02.000 SYMLINK libspdk_bdev_delay.so 00:07:02.000 SYMLINK libspdk_bdev_iscsi.so 00:07:02.000 SO libspdk_bdev_lvol.so.6.0 00:07:02.000 SYMLINK libspdk_bdev_malloc.so 00:07:02.000 SYMLINK libspdk_bdev_virtio.so 00:07:02.000 SYMLINK libspdk_bdev_lvol.so 00:07:02.259 LIB libspdk_bdev_raid.a 00:07:02.259 SO libspdk_bdev_raid.so.6.0 00:07:02.518 SYMLINK libspdk_bdev_raid.so 00:07:03.456 LIB libspdk_bdev_nvme.a 00:07:03.456 SO libspdk_bdev_nvme.so.7.0 00:07:03.715 SYMLINK libspdk_bdev_nvme.so 00:07:04.345 CC module/event/subsystems/vmd/vmd.o 00:07:04.345 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:04.345 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:04.345 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:04.345 CC module/event/subsystems/keyring/keyring.o 00:07:04.345 CC module/event/subsystems/sock/sock.o 00:07:04.345 CC module/event/subsystems/iobuf/iobuf.o 00:07:04.345 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:04.345 CC module/event/subsystems/scheduler/scheduler.o 00:07:04.618 LIB libspdk_event_vmd.a 00:07:04.618 LIB libspdk_event_vhost_blk.a 00:07:04.618 LIB libspdk_event_vfu_tgt.a 00:07:04.618 LIB libspdk_event_keyring.a 00:07:04.618 LIB libspdk_event_sock.a 00:07:04.618 LIB libspdk_event_iobuf.a 00:07:04.618 LIB libspdk_event_scheduler.a 00:07:04.618 SO libspdk_event_vhost_blk.so.3.0 00:07:04.618 SO libspdk_event_vfu_tgt.so.3.0 00:07:04.618 SO libspdk_event_vmd.so.6.0 00:07:04.618 SO libspdk_event_keyring.so.1.0 00:07:04.618 SO libspdk_event_sock.so.5.0 00:07:04.618 SO libspdk_event_iobuf.so.3.0 00:07:04.618 SO libspdk_event_scheduler.so.4.0 00:07:04.618 SYMLINK libspdk_event_vhost_blk.so 00:07:04.618 SYMLINK libspdk_event_vfu_tgt.so 00:07:04.618 SYMLINK libspdk_event_vmd.so 00:07:04.618 SYMLINK libspdk_event_keyring.so 00:07:04.618 SYMLINK libspdk_event_sock.so 00:07:04.618 SYMLINK libspdk_event_scheduler.so 00:07:04.618 SYMLINK libspdk_event_iobuf.so 00:07:05.187 CC module/event/subsystems/accel/accel.o 00:07:05.187 LIB libspdk_event_accel.a 00:07:05.187 SO libspdk_event_accel.so.6.0 00:07:05.187 SYMLINK libspdk_event_accel.so 00:07:05.755 CC module/event/subsystems/bdev/bdev.o 00:07:05.755 LIB libspdk_event_bdev.a 00:07:06.013 SO libspdk_event_bdev.so.6.0 00:07:06.013 SYMLINK libspdk_event_bdev.so 00:07:06.272 CC module/event/subsystems/ublk/ublk.o 00:07:06.272 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:06.272 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:06.272 CC module/event/subsystems/nbd/nbd.o 00:07:06.272 CC module/event/subsystems/scsi/scsi.o 00:07:06.531 LIB libspdk_event_ublk.a 00:07:06.531 LIB libspdk_event_nbd.a 00:07:06.531 SO libspdk_event_ublk.so.3.0 00:07:06.531 LIB libspdk_event_scsi.a 00:07:06.531 SO libspdk_event_nbd.so.6.0 00:07:06.531 SO libspdk_event_scsi.so.6.0 00:07:06.531 LIB libspdk_event_nvmf.a 00:07:06.531 SYMLINK libspdk_event_ublk.so 00:07:06.531 SYMLINK libspdk_event_nbd.so 00:07:06.531 SO 
libspdk_event_nvmf.so.6.0 00:07:06.531 SYMLINK libspdk_event_scsi.so 00:07:06.791 SYMLINK libspdk_event_nvmf.so 00:07:07.050 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:07.050 CC module/event/subsystems/iscsi/iscsi.o 00:07:07.309 LIB libspdk_event_vhost_scsi.a 00:07:07.309 LIB libspdk_event_iscsi.a 00:07:07.309 SO libspdk_event_vhost_scsi.so.3.0 00:07:07.309 SO libspdk_event_iscsi.so.6.0 00:07:07.309 SYMLINK libspdk_event_vhost_scsi.so 00:07:07.309 SYMLINK libspdk_event_iscsi.so 00:07:07.569 SO libspdk.so.6.0 00:07:07.569 SYMLINK libspdk.so 00:07:07.828 CXX app/trace/trace.o 00:07:07.828 CC app/spdk_nvme_perf/perf.o 00:07:07.828 CC app/spdk_lspci/spdk_lspci.o 00:07:07.828 CC app/trace_record/trace_record.o 00:07:07.828 CC app/spdk_top/spdk_top.o 00:07:07.828 CC app/spdk_nvme_discover/discovery_aer.o 00:07:07.828 TEST_HEADER include/spdk/accel.h 00:07:07.828 TEST_HEADER include/spdk/accel_module.h 00:07:07.828 CC test/rpc_client/rpc_client_test.o 00:07:07.828 TEST_HEADER include/spdk/barrier.h 00:07:07.828 TEST_HEADER include/spdk/assert.h 00:07:07.828 TEST_HEADER include/spdk/base64.h 00:07:07.828 TEST_HEADER include/spdk/bdev.h 00:07:07.828 TEST_HEADER include/spdk/bdev_module.h 00:07:07.828 CC app/spdk_nvme_identify/identify.o 00:07:07.828 TEST_HEADER include/spdk/bdev_zone.h 00:07:08.100 TEST_HEADER include/spdk/bit_pool.h 00:07:08.100 TEST_HEADER include/spdk/bit_array.h 00:07:08.100 TEST_HEADER include/spdk/blob_bdev.h 00:07:08.100 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:08.100 TEST_HEADER include/spdk/blobfs.h 00:07:08.100 TEST_HEADER include/spdk/blob.h 00:07:08.100 TEST_HEADER include/spdk/conf.h 00:07:08.100 TEST_HEADER include/spdk/config.h 00:07:08.100 TEST_HEADER include/spdk/cpuset.h 00:07:08.100 TEST_HEADER include/spdk/crc16.h 00:07:08.100 TEST_HEADER include/spdk/crc32.h 00:07:08.100 TEST_HEADER include/spdk/crc64.h 00:07:08.100 TEST_HEADER include/spdk/dif.h 00:07:08.100 TEST_HEADER include/spdk/dma.h 00:07:08.100 TEST_HEADER include/spdk/endian.h 00:07:08.100 TEST_HEADER include/spdk/env_dpdk.h 00:07:08.100 TEST_HEADER include/spdk/env.h 00:07:08.100 TEST_HEADER include/spdk/fd_group.h 00:07:08.100 TEST_HEADER include/spdk/event.h 00:07:08.100 CC app/nvmf_tgt/nvmf_main.o 00:07:08.100 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:08.100 TEST_HEADER include/spdk/fd.h 00:07:08.100 TEST_HEADER include/spdk/file.h 00:07:08.100 TEST_HEADER include/spdk/ftl.h 00:07:08.100 TEST_HEADER include/spdk/gpt_spec.h 00:07:08.100 TEST_HEADER include/spdk/hexlify.h 00:07:08.100 TEST_HEADER include/spdk/histogram_data.h 00:07:08.100 TEST_HEADER include/spdk/idxd.h 00:07:08.100 TEST_HEADER include/spdk/idxd_spec.h 00:07:08.100 TEST_HEADER include/spdk/init.h 00:07:08.100 TEST_HEADER include/spdk/ioat_spec.h 00:07:08.100 TEST_HEADER include/spdk/ioat.h 00:07:08.100 TEST_HEADER include/spdk/iscsi_spec.h 00:07:08.100 TEST_HEADER include/spdk/json.h 00:07:08.100 TEST_HEADER include/spdk/jsonrpc.h 00:07:08.100 TEST_HEADER include/spdk/keyring_module.h 00:07:08.100 TEST_HEADER include/spdk/keyring.h 00:07:08.100 TEST_HEADER include/spdk/likely.h 00:07:08.100 TEST_HEADER include/spdk/log.h 00:07:08.100 CC app/spdk_dd/spdk_dd.o 00:07:08.100 TEST_HEADER include/spdk/lvol.h 00:07:08.100 TEST_HEADER include/spdk/memory.h 00:07:08.100 TEST_HEADER include/spdk/nbd.h 00:07:08.100 TEST_HEADER include/spdk/mmio.h 00:07:08.100 TEST_HEADER include/spdk/notify.h 00:07:08.100 TEST_HEADER include/spdk/nvme.h 00:07:08.100 TEST_HEADER include/spdk/nvme_intel.h 00:07:08.100 CC app/vhost/vhost.o 
00:07:08.100 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:08.100 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:08.100 TEST_HEADER include/spdk/nvme_spec.h 00:07:08.100 TEST_HEADER include/spdk/nvme_zns.h 00:07:08.100 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:08.100 TEST_HEADER include/spdk/nvmf.h 00:07:08.100 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:08.100 TEST_HEADER include/spdk/nvmf_spec.h 00:07:08.100 CC app/iscsi_tgt/iscsi_tgt.o 00:07:08.100 TEST_HEADER include/spdk/opal.h 00:07:08.100 TEST_HEADER include/spdk/nvmf_transport.h 00:07:08.100 TEST_HEADER include/spdk/opal_spec.h 00:07:08.100 TEST_HEADER include/spdk/pci_ids.h 00:07:08.100 TEST_HEADER include/spdk/pipe.h 00:07:08.100 TEST_HEADER include/spdk/queue.h 00:07:08.100 TEST_HEADER include/spdk/reduce.h 00:07:08.100 TEST_HEADER include/spdk/rpc.h 00:07:08.100 TEST_HEADER include/spdk/scheduler.h 00:07:08.100 TEST_HEADER include/spdk/scsi.h 00:07:08.100 TEST_HEADER include/spdk/scsi_spec.h 00:07:08.100 TEST_HEADER include/spdk/sock.h 00:07:08.100 TEST_HEADER include/spdk/stdinc.h 00:07:08.100 TEST_HEADER include/spdk/string.h 00:07:08.100 TEST_HEADER include/spdk/thread.h 00:07:08.100 TEST_HEADER include/spdk/trace.h 00:07:08.100 TEST_HEADER include/spdk/trace_parser.h 00:07:08.100 TEST_HEADER include/spdk/tree.h 00:07:08.100 TEST_HEADER include/spdk/ublk.h 00:07:08.100 TEST_HEADER include/spdk/uuid.h 00:07:08.100 TEST_HEADER include/spdk/util.h 00:07:08.100 TEST_HEADER include/spdk/version.h 00:07:08.100 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:08.100 TEST_HEADER include/spdk/vhost.h 00:07:08.100 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:08.100 CC app/spdk_tgt/spdk_tgt.o 00:07:08.100 TEST_HEADER include/spdk/xor.h 00:07:08.100 TEST_HEADER include/spdk/vmd.h 00:07:08.100 TEST_HEADER include/spdk/zipf.h 00:07:08.100 CXX test/cpp_headers/accel.o 00:07:08.100 CXX test/cpp_headers/accel_module.o 00:07:08.100 CXX test/cpp_headers/assert.o 00:07:08.100 CXX test/cpp_headers/barrier.o 00:07:08.100 CXX test/cpp_headers/base64.o 00:07:08.100 CXX test/cpp_headers/bdev_module.o 00:07:08.100 CXX test/cpp_headers/bdev.o 00:07:08.100 CXX test/cpp_headers/bdev_zone.o 00:07:08.100 CXX test/cpp_headers/bit_array.o 00:07:08.100 CXX test/cpp_headers/blob_bdev.o 00:07:08.100 CXX test/cpp_headers/bit_pool.o 00:07:08.100 CXX test/cpp_headers/blobfs_bdev.o 00:07:08.100 CXX test/cpp_headers/blobfs.o 00:07:08.100 CXX test/cpp_headers/blob.o 00:07:08.100 CXX test/cpp_headers/conf.o 00:07:08.100 CXX test/cpp_headers/config.o 00:07:08.100 CXX test/cpp_headers/cpuset.o 00:07:08.100 CXX test/cpp_headers/crc16.o 00:07:08.100 CXX test/cpp_headers/crc32.o 00:07:08.100 CXX test/cpp_headers/crc64.o 00:07:08.100 CXX test/cpp_headers/dif.o 00:07:08.100 CXX test/cpp_headers/dma.o 00:07:08.100 CXX test/cpp_headers/endian.o 00:07:08.100 CXX test/cpp_headers/env_dpdk.o 00:07:08.100 CXX test/cpp_headers/env.o 00:07:08.100 CXX test/cpp_headers/event.o 00:07:08.101 CXX test/cpp_headers/fd_group.o 00:07:08.101 CXX test/cpp_headers/fd.o 00:07:08.101 CXX test/cpp_headers/file.o 00:07:08.101 CXX test/cpp_headers/gpt_spec.o 00:07:08.101 CXX test/cpp_headers/ftl.o 00:07:08.101 CXX test/cpp_headers/histogram_data.o 00:07:08.101 CXX test/cpp_headers/hexlify.o 00:07:08.101 CXX test/cpp_headers/idxd.o 00:07:08.101 CXX test/cpp_headers/idxd_spec.o 00:07:08.101 CXX test/cpp_headers/init.o 00:07:08.364 CXX test/cpp_headers/ioat.o 00:07:08.364 CC examples/vmd/lsvmd/lsvmd.o 00:07:08.364 CC examples/nvme/hello_world/hello_world.o 00:07:08.364 CC 
examples/nvme/nvme_manage/nvme_manage.o 00:07:08.364 CC examples/vmd/led/led.o 00:07:08.364 CC examples/nvme/arbitration/arbitration.o 00:07:08.364 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:08.364 CC examples/nvme/hotplug/hotplug.o 00:07:08.364 CC examples/nvme/abort/abort.o 00:07:08.364 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:08.364 CC examples/util/tls_psk/tls_psk_print.o 00:07:08.364 CC test/nvme/reset/reset.o 00:07:08.364 CC examples/nvme/reconnect/reconnect.o 00:07:08.364 CC test/nvme/aer/aer.o 00:07:08.364 CC test/nvme/startup/startup.o 00:07:08.364 CC examples/sock/hello_world/hello_sock.o 00:07:08.364 CC examples/accel/perf/accel_perf.o 00:07:08.364 CC app/fio/nvme/fio_plugin.o 00:07:08.364 CC test/nvme/e2edp/nvme_dp.o 00:07:08.364 CC test/nvme/err_injection/err_injection.o 00:07:08.364 CC examples/idxd/perf/perf.o 00:07:08.364 CC test/event/event_perf/event_perf.o 00:07:08.364 CC test/nvme/fused_ordering/fused_ordering.o 00:07:08.364 CC test/event/reactor_perf/reactor_perf.o 00:07:08.364 CC test/nvme/cuse/cuse.o 00:07:08.364 CC test/app/jsoncat/jsoncat.o 00:07:08.364 CC test/nvme/sgl/sgl.o 00:07:08.364 CC test/env/memory/memory_ut.o 00:07:08.364 CC test/nvme/overhead/overhead.o 00:07:08.364 CC test/event/reactor/reactor.o 00:07:08.364 CC test/app/histogram_perf/histogram_perf.o 00:07:08.364 CC test/nvme/fdp/fdp.o 00:07:08.364 CC test/event/app_repeat/app_repeat.o 00:07:08.364 CC test/nvme/boot_partition/boot_partition.o 00:07:08.364 CC test/nvme/compliance/nvme_compliance.o 00:07:08.364 CC examples/util/zipf/zipf.o 00:07:08.364 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:08.364 CC test/nvme/reserve/reserve.o 00:07:08.364 CC test/nvme/connect_stress/connect_stress.o 00:07:08.364 CC test/app/stub/stub.o 00:07:08.364 CC examples/ioat/perf/perf.o 00:07:08.364 CC test/thread/poller_perf/poller_perf.o 00:07:08.364 CC test/env/vtophys/vtophys.o 00:07:08.364 CC examples/ioat/verify/verify.o 00:07:08.364 CC test/nvme/simple_copy/simple_copy.o 00:07:08.364 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:08.364 CC examples/blob/hello_world/hello_blob.o 00:07:08.364 CC examples/nvmf/nvmf/nvmf.o 00:07:08.364 CC test/env/pci/pci_ut.o 00:07:08.364 CC test/dma/test_dma/test_dma.o 00:07:08.364 CC examples/blob/cli/blobcli.o 00:07:08.364 CC test/blobfs/mkfs/mkfs.o 00:07:08.364 CC examples/bdev/bdevperf/bdevperf.o 00:07:08.364 CC test/bdev/bdevio/bdevio.o 00:07:08.364 CC test/accel/dif/dif.o 00:07:08.364 CC test/event/scheduler/scheduler.o 00:07:08.364 CC examples/bdev/hello_world/hello_bdev.o 00:07:08.364 CC test/app/bdev_svc/bdev_svc.o 00:07:08.364 CC app/fio/bdev/fio_plugin.o 00:07:08.364 LINK spdk_lspci 00:07:08.364 CC examples/thread/thread/thread_ex.o 00:07:08.634 LINK nvmf_tgt 00:07:08.905 LINK interrupt_tgt 00:07:08.905 LINK vhost 00:07:08.905 LINK rpc_client_test 00:07:08.905 CC test/lvol/esnap/esnap.o 00:07:08.905 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:08.905 CC test/env/mem_callbacks/mem_callbacks.o 00:07:08.905 LINK spdk_trace_record 00:07:08.905 LINK lsvmd 00:07:08.905 LINK spdk_nvme_discover 00:07:08.905 LINK iscsi_tgt 00:07:08.905 CXX test/cpp_headers/ioat_spec.o 00:07:08.905 LINK event_perf 00:07:08.905 LINK pmr_persistence 00:07:08.905 LINK led 00:07:08.905 CXX test/cpp_headers/iscsi_spec.o 00:07:08.905 LINK reactor 00:07:08.905 LINK poller_perf 00:07:08.905 LINK histogram_perf 00:07:08.905 LINK jsoncat 00:07:08.905 CXX test/cpp_headers/json.o 00:07:08.905 LINK reactor_perf 00:07:08.905 LINK spdk_tgt 00:07:08.905 CXX test/cpp_headers/jsonrpc.o 
00:07:08.905 CXX test/cpp_headers/keyring_module.o 00:07:08.905 LINK app_repeat 00:07:08.905 CXX test/cpp_headers/keyring.o 00:07:08.905 CXX test/cpp_headers/likely.o 00:07:08.905 LINK cmb_copy 00:07:08.905 CXX test/cpp_headers/log.o 00:07:08.905 LINK startup 00:07:08.905 CXX test/cpp_headers/lvol.o 00:07:08.905 CXX test/cpp_headers/memory.o 00:07:08.905 LINK connect_stress 00:07:08.905 CXX test/cpp_headers/mmio.o 00:07:08.905 CXX test/cpp_headers/nbd.o 00:07:08.905 CXX test/cpp_headers/notify.o 00:07:08.905 LINK zipf 00:07:08.905 CXX test/cpp_headers/nvme.o 00:07:08.905 CXX test/cpp_headers/nvme_intel.o 00:07:08.905 CXX test/cpp_headers/nvme_ocssd.o 00:07:08.905 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:08.905 LINK vtophys 00:07:09.172 CXX test/cpp_headers/nvme_spec.o 00:07:09.172 LINK spdk_dd 00:07:09.172 LINK env_dpdk_post_init 00:07:09.172 CXX test/cpp_headers/nvme_zns.o 00:07:09.172 CXX test/cpp_headers/nvmf_cmd.o 00:07:09.172 LINK hello_sock 00:07:09.172 LINK verify 00:07:09.172 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:09.172 CXX test/cpp_headers/nvmf.o 00:07:09.172 LINK boot_partition 00:07:09.172 CXX test/cpp_headers/nvmf_spec.o 00:07:09.172 CXX test/cpp_headers/nvmf_transport.o 00:07:09.172 LINK reserve 00:07:09.172 CXX test/cpp_headers/opal.o 00:07:09.172 LINK hotplug 00:07:09.172 CXX test/cpp_headers/opal_spec.o 00:07:09.172 LINK err_injection 00:07:09.172 CXX test/cpp_headers/pci_ids.o 00:07:09.172 CXX test/cpp_headers/pipe.o 00:07:09.172 CXX test/cpp_headers/queue.o 00:07:09.172 LINK bdev_svc 00:07:09.172 CXX test/cpp_headers/rpc.o 00:07:09.172 CXX test/cpp_headers/scheduler.o 00:07:09.172 CXX test/cpp_headers/reduce.o 00:07:09.172 CXX test/cpp_headers/scsi.o 00:07:09.172 CXX test/cpp_headers/scsi_spec.o 00:07:09.172 LINK stub 00:07:09.172 CXX test/cpp_headers/sock.o 00:07:09.172 LINK mkfs 00:07:09.172 LINK ioat_perf 00:07:09.172 LINK doorbell_aers 00:07:09.172 LINK fused_ordering 00:07:09.172 LINK simple_copy 00:07:09.172 CXX test/cpp_headers/stdinc.o 00:07:09.172 LINK sgl 00:07:09.172 LINK hello_world 00:07:09.172 LINK reset 00:07:09.172 LINK nvme_dp 00:07:09.172 LINK scheduler 00:07:09.172 CXX test/cpp_headers/string.o 00:07:09.172 CXX test/cpp_headers/thread.o 00:07:09.172 LINK overhead 00:07:09.172 LINK hello_bdev 00:07:09.172 CXX test/cpp_headers/trace.o 00:07:09.172 LINK nvme_compliance 00:07:09.172 LINK hello_blob 00:07:09.172 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:09.172 CXX test/cpp_headers/trace_parser.o 00:07:09.172 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:09.172 CXX test/cpp_headers/tree.o 00:07:09.172 LINK aer 00:07:09.172 LINK idxd_perf 00:07:09.172 LINK fdp 00:07:09.172 LINK thread 00:07:09.435 LINK abort 00:07:09.435 CXX test/cpp_headers/ublk.o 00:07:09.435 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:09.435 LINK nvmf 00:07:09.435 LINK arbitration 00:07:09.435 CXX test/cpp_headers/util.o 00:07:09.435 CXX test/cpp_headers/uuid.o 00:07:09.435 CXX test/cpp_headers/version.o 00:07:09.435 LINK reconnect 00:07:09.435 CXX test/cpp_headers/vfio_user_pci.o 00:07:09.435 LINK spdk_trace 00:07:09.435 CXX test/cpp_headers/vhost.o 00:07:09.435 CXX test/cpp_headers/vfio_user_spec.o 00:07:09.435 CXX test/cpp_headers/vmd.o 00:07:09.435 CXX test/cpp_headers/xor.o 00:07:09.435 CXX test/cpp_headers/zipf.o 00:07:09.435 LINK test_dma 00:07:09.435 LINK bdevio 00:07:09.435 LINK tls_psk_print 00:07:09.435 LINK nvme_manage 00:07:09.435 LINK pci_ut 00:07:09.693 LINK accel_perf 00:07:09.693 LINK dif 00:07:09.693 LINK spdk_nvme 00:07:09.693 LINK spdk_bdev 
00:07:09.693 LINK blobcli 00:07:09.693 LINK nvme_fuzz 00:07:09.693 LINK spdk_nvme_perf 00:07:09.952 LINK spdk_nvme_identify 00:07:09.952 LINK spdk_top 00:07:09.952 LINK vhost_fuzz 00:07:09.952 LINK mem_callbacks 00:07:10.211 LINK bdevperf 00:07:10.211 LINK cuse 00:07:10.470 LINK memory_ut 00:07:11.039 LINK iscsi_fuzz 00:07:14.443 LINK esnap 00:07:14.443 00:07:14.443 real 0m55.231s 00:07:14.443 user 7m56.180s 00:07:14.443 sys 5m2.211s 00:07:14.443 13:35:28 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:07:14.443 13:35:28 make -- common/autotest_common.sh@10 -- $ set +x 00:07:14.443 ************************************ 00:07:14.443 END TEST make 00:07:14.443 ************************************ 00:07:14.443 13:35:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:14.443 13:35:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:14.443 13:35:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:14.443 13:35:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:14.443 13:35:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:14.443 13:35:28 -- pm/common@44 -- $ pid=1093703 00:07:14.443 13:35:28 -- pm/common@50 -- $ kill -TERM 1093703 00:07:14.443 13:35:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:14.443 13:35:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:14.443 13:35:28 -- pm/common@44 -- $ pid=1093704 00:07:14.443 13:35:28 -- pm/common@50 -- $ kill -TERM 1093704 00:07:14.443 13:35:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:14.443 13:35:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:14.443 13:35:28 -- pm/common@44 -- $ pid=1093706 00:07:14.443 13:35:28 -- pm/common@50 -- $ kill -TERM 1093706 00:07:14.443 13:35:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:14.443 13:35:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:14.443 13:35:28 -- pm/common@44 -- $ pid=1093730 00:07:14.443 13:35:28 -- pm/common@50 -- $ sudo -E kill -TERM 1093730 00:07:14.443 13:35:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.443 13:35:28 -- nvmf/common.sh@7 -- # uname -s 00:07:14.443 13:35:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.443 13:35:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.443 13:35:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.443 13:35:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.443 13:35:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.443 13:35:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.443 13:35:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.443 13:35:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.443 13:35:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.443 13:35:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.443 13:35:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:07:14.443 13:35:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:07:14.443 13:35:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.443 13:35:28 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.443 13:35:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.443 13:35:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.443 13:35:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.443 13:35:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.444 13:35:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.444 13:35:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.444 13:35:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.444 13:35:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.444 13:35:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.444 13:35:28 -- paths/export.sh@5 -- # export PATH 00:07:14.444 13:35:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.444 13:35:28 -- nvmf/common.sh@47 -- # : 0 00:07:14.444 13:35:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.444 13:35:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.444 13:35:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.444 13:35:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.444 13:35:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.444 13:35:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.444 13:35:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.444 13:35:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.444 13:35:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:14.444 13:35:28 -- spdk/autotest.sh@32 -- # uname -s 00:07:14.444 13:35:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:14.444 13:35:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:14.444 13:35:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:14.444 13:35:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:14.444 13:35:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:14.444 13:35:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:14.444 13:35:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:14.444 13:35:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:14.444 13:35:28 -- spdk/autotest.sh@48 -- # udevadm_pid=1155653 00:07:14.444 13:35:28 -- 
spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:14.444 13:35:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:14.444 13:35:28 -- pm/common@17 -- # local monitor 00:07:14.444 13:35:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:14.444 13:35:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:14.444 13:35:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:14.444 13:35:28 -- pm/common@21 -- # date +%s 00:07:14.444 13:35:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:14.444 13:35:28 -- pm/common@21 -- # date +%s 00:07:14.444 13:35:28 -- pm/common@25 -- # sleep 1 00:07:14.444 13:35:28 -- pm/common@21 -- # date +%s 00:07:14.444 13:35:28 -- pm/common@21 -- # date +%s 00:07:14.444 13:35:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718019328 00:07:14.444 13:35:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718019328 00:07:14.444 13:35:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718019328 00:07:14.444 13:35:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718019328 00:07:14.444 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718019328_collect-vmstat.pm.log 00:07:14.703 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718019328_collect-cpu-load.pm.log 00:07:14.703 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718019328_collect-cpu-temp.pm.log 00:07:14.703 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718019328_collect-bmc-pm.bmc.pm.log 00:07:15.643 13:35:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:15.643 13:35:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:15.643 13:35:29 -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:15.643 13:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.643 13:35:29 -- spdk/autotest.sh@59 -- # create_test_list 00:07:15.643 13:35:29 -- common/autotest_common.sh@747 -- # xtrace_disable 00:07:15.643 13:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.643 13:35:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:15.643 13:35:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:15.643 13:35:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:15.643 13:35:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:15.643 13:35:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:15.643 13:35:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:15.643 13:35:29 -- common/autotest_common.sh@1454 -- # uname 
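The entries above show autotest.sh starting its resource monitors the same way the build stage that just finished stopped its own at the top of this section: each collector under scripts/perf/pm is launched with one shared `date +%s` tag and the output/power directory, and is later terminated by reading a PID file and sending TERM. The sketch below only illustrates that start/stop pattern under the paths shown in this log; the loop and helper functions are illustrative, not the actual pm/common code.

    # Illustrative start/stop sketch of the resource monitors traced above.
    # SPDK_DIR and the collector names come from this log; the helpers do not exist in SPDK.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    POWER_DIR=$SPDK_DIR/../output/power
    STAMP=$(date +%s)                              # one tag shared by every collector
    MONITORS=(collect-cpu-load collect-vmstat collect-cpu-temp)

    start_monitors() {
        local m
        mkdir -p "$POWER_DIR"
        for m in "${MONITORS[@]}"; do
            "$SPDK_DIR/scripts/perf/pm/$m" -d "$POWER_DIR" -l -p "monitor.autotest.sh.$STAMP" &
            echo $! > "$POWER_DIR/$m.pid"          # in this sketch the caller records the PID itself
        done
    }

    stop_monitors() {
        local m
        for m in "${MONITORS[@]}"; do
            [[ -e $POWER_DIR/$m.pid ]] || continue
            kill -TERM "$(<"$POWER_DIR/$m.pid")"   # mirrors the kill -TERM calls earlier in this section
        done
    }

The collect-bmc-pm collector is the odd one out: as the trace shows, it is both launched and killed with sudo -E, presumably because the BMC power readings need elevated privileges.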
00:07:15.643 13:35:29 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:07:15.643 13:35:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:15.643 13:35:29 -- common/autotest_common.sh@1474 -- # uname 00:07:15.643 13:35:29 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:07:15.643 13:35:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:07:15.643 13:35:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:07:15.643 13:35:29 -- spdk/autotest.sh@72 -- # hash lcov 00:07:15.643 13:35:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:15.643 13:35:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:07:15.643 --rc lcov_branch_coverage=1 00:07:15.643 --rc lcov_function_coverage=1 00:07:15.643 --rc genhtml_branch_coverage=1 00:07:15.643 --rc genhtml_function_coverage=1 00:07:15.643 --rc genhtml_legend=1 00:07:15.643 --rc geninfo_all_blocks=1 00:07:15.643 ' 00:07:15.643 13:35:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:07:15.643 --rc lcov_branch_coverage=1 00:07:15.643 --rc lcov_function_coverage=1 00:07:15.643 --rc genhtml_branch_coverage=1 00:07:15.643 --rc genhtml_function_coverage=1 00:07:15.643 --rc genhtml_legend=1 00:07:15.643 --rc geninfo_all_blocks=1 00:07:15.643 ' 00:07:15.643 13:35:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:07:15.643 --rc lcov_branch_coverage=1 00:07:15.643 --rc lcov_function_coverage=1 00:07:15.643 --rc genhtml_branch_coverage=1 00:07:15.643 --rc genhtml_function_coverage=1 00:07:15.643 --rc genhtml_legend=1 00:07:15.643 --rc geninfo_all_blocks=1 00:07:15.643 --no-external' 00:07:15.643 13:35:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:07:15.643 --rc lcov_branch_coverage=1 00:07:15.643 --rc lcov_function_coverage=1 00:07:15.643 --rc genhtml_branch_coverage=1 00:07:15.643 --rc genhtml_function_coverage=1 00:07:15.643 --rc genhtml_legend=1 00:07:15.643 --rc geninfo_all_blocks=1 00:07:15.643 --no-external' 00:07:15.643 13:35:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:07:15.643 lcov: LCOV version 1.14 00:07:15.643 13:35:30 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:07:30.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:30.526 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:07:48.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:07:48.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:07:48.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:07:48.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:07:48.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:07:48.626 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:07:48.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:07:48.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:07:48.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:07:48.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:07:48.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:07:48.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:07:48.627 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions 
found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:07:48.627 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:07:48.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:07:48.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:07:48.628 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no 
functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:07:48.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:07:48.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:07:48.628 13:36:02 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:07:48.628 13:36:02 -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:48.628 13:36:02 -- common/autotest_common.sh@10 -- # set +x 00:07:48.628 13:36:02 -- spdk/autotest.sh@91 -- # rm -f 00:07:48.628 13:36:02 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:52.820 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:07:52.820 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:07:52.820 13:36:07 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:07:52.820 13:36:07 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:07:52.820 13:36:07 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:07:52.820 13:36:07 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:07:52.820 13:36:07 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:07:52.820 13:36:07 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:07:52.820 13:36:07 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:07:52.820 13:36:07 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:52.820 13:36:07 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:07:52.820 13:36:07 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:07:52.820 13:36:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:07:52.820 13:36:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:07:52.820 13:36:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:07:52.820 13:36:07 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:07:52.820 13:36:07 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:52.820 No valid GPT data, bailing 
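The reset pass above leaves the ioat DMA channels where they were ("Already using the ioatdma driver") and leads into the disk-claiming check: autotest skips zoned namespaces, probes /dev/nvme0n1 for a GPT (spdk-gpt.py bails with "No valid GPT data"), lets blkid confirm there is no partition table, and then zero-fills the first MiB so later tests see a clean disk, as the dd output in the next entries shows. A condensed, illustrative sketch of that decision follows; the real logic is spread across autotest.sh and scripts/common.sh, and the device name is taken from this log.

    # Condensed sketch of the device-claim flow traced here (illustrative, not the real script).
    dev=/dev/nvme0n1
    name=${dev##*/}
    if [[ -e /sys/block/$name/queue/zoned && $(< /sys/block/$name/queue/zoned) != none ]]; then
        echo "skipping zoned device $dev"           # zoned namespaces are left alone
    elif [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        # no partition table found -> same outcome as the "No valid GPT data, bailing" above
        dd if=/dev/zero of="$dev" bs=1M count=1     # stamp the disk as free, as the dd output below shows
    fi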
00:07:52.820 13:36:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:52.820 13:36:07 -- scripts/common.sh@391 -- # pt= 00:07:52.820 13:36:07 -- scripts/common.sh@392 -- # return 1 00:07:52.820 13:36:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:52.820 1+0 records in 00:07:52.820 1+0 records out 00:07:52.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00273799 s, 383 MB/s 00:07:52.820 13:36:07 -- spdk/autotest.sh@118 -- # sync 00:07:52.820 13:36:07 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:52.820 13:36:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:52.820 13:36:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:00.947 13:36:14 -- spdk/autotest.sh@124 -- # uname -s 00:08:00.947 13:36:14 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:08:00.947 13:36:14 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:08:00.947 13:36:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:00.947 13:36:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:00.947 13:36:14 -- common/autotest_common.sh@10 -- # set +x 00:08:00.947 ************************************ 00:08:00.947 START TEST setup.sh 00:08:00.947 ************************************ 00:08:00.947 13:36:14 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:08:00.947 * Looking for test storage... 00:08:00.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:08:00.947 13:36:14 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:08:00.947 13:36:14 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:08:00.947 13:36:14 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:08:00.947 13:36:14 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:00.947 13:36:14 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:00.947 13:36:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:00.947 ************************************ 00:08:00.947 START TEST acl 00:08:00.947 ************************************ 00:08:00.947 13:36:14 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:08:00.947 * Looking for test storage... 
00:08:00.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:08:00.947 13:36:14 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:08:00.947 13:36:14 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:08:00.947 13:36:14 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:08:00.947 13:36:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:08:00.947 13:36:14 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:08:00.947 13:36:14 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:08:00.947 13:36:14 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:08:00.947 13:36:14 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:00.947 13:36:14 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:08:00.947 13:36:14 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:08:00.947 13:36:14 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:08:00.947 13:36:14 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:08:00.947 13:36:14 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:08:00.947 13:36:14 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:08:00.947 13:36:14 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:00.947 13:36:14 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:05.144 13:36:18 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:08:05.144 13:36:18 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:08:05.144 13:36:18 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:08:05.144 13:36:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:05.144 13:36:18 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:08:05.144 13:36:18 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:08:09.344 Hugepages 00:08:09.344 node hugesize free / total 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 00:08:09.344 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.344 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.345 13:36:23 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:08:09.345 13:36:23 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:08:09.345 13:36:23 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:09.345 13:36:23 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:09.345 13:36:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:09.345 ************************************ 00:08:09.345 START TEST denied 00:08:09.345 ************************************ 00:08:09.345 13:36:23 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:08:09.345 13:36:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:08:09.345 13:36:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:08:09.345 13:36:23 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:08:09.345 13:36:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:08:09.345 13:36:23 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:08:13.544 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:08:13.544 13:36:27 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:08:13.544 13:36:27 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:08:13.544 13:36:27 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:08:13.544 13:36:27 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:08:13.544 13:36:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:08:13.544 13:36:27 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:13.544 13:36:27 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:13.544 13:36:27 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:08:13.544 13:36:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:13.544 13:36:27 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:18.821 00:08:18.821 real 0m9.625s 00:08:18.821 user 0m2.978s 00:08:18.821 sys 0m5.900s 00:08:18.821 13:36:33 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:18.821 13:36:33 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:08:18.821 ************************************ 00:08:18.821 END TEST denied 00:08:18.821 ************************************ 00:08:18.821 13:36:33 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:08:18.821 13:36:33 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:18.821 13:36:33 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:18.821 13:36:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:18.821 ************************************ 00:08:18.821 START TEST allowed 00:08:18.821 ************************************ 00:08:18.821 13:36:33 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:08:18.821 13:36:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:08:18.821 13:36:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:08:18.821 13:36:33 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:08:18.821 13:36:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:08:18.821 13:36:33 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:08:25.394 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:08:25.394 13:36:38 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:08:25.394 13:36:38 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:08:25.394 13:36:38 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:08:25.394 13:36:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:25.394 13:36:38 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:28.684 00:08:28.684 real 0m9.861s 00:08:28.684 user 0m2.748s 00:08:28.684 sys 0m5.568s 00:08:28.684 13:36:42 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:28.684 13:36:42 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:08:28.684 ************************************ 00:08:28.684 END TEST allowed 00:08:28.684 ************************************ 00:08:28.684 00:08:28.684 real 0m28.743s 00:08:28.684 user 0m8.992s 00:08:28.684 sys 0m17.780s 00:08:28.684 13:36:43 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:28.684 13:36:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:28.684 ************************************ 00:08:28.684 END TEST acl 00:08:28.684 ************************************ 00:08:28.684 13:36:43 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:08:28.684 13:36:43 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:28.684 13:36:43 setup.sh -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:08:28.684 13:36:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:28.684 ************************************ 00:08:28.684 START TEST hugepages 00:08:28.684 ************************************ 00:08:28.684 13:36:43 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:08:28.944 * Looking for test storage... 00:08:28.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:08:28.944 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:08:28.944 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:08:28.944 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:08:28.944 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:08:28.944 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38959884 kB' 'MemAvailable: 40848756 kB' 'Buffers: 2724 kB' 'Cached: 12786060 kB' 'SwapCached: 308 kB' 'Active: 10215580 kB' 'Inactive: 3197504 kB' 'Active(anon): 9770756 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627324 kB' 'Mapped: 220964 kB' 'Shmem: 10494348 kB' 'KReclaimable: 499160 kB' 'Slab: 1163296 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 664136 kB' 'KernelStack: 22368 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 12690612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218824 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
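The wall of `[[ ... == Hugepagesize ]]` / `continue` entries around this point is the hugepages test's get_meminfo helper (setup/common.sh) scanning the /proc/meminfo snapshot captured above, field by field, until it reaches the Hugepagesize line; the `node=` and `/sys/devices/system/node/...` entries show the same helper can also read a per-NUMA-node meminfo. A compact, illustrative stand-in for that lookup:

    # Illustrative stand-in for the Hugepagesize lookup being traced here;
    # it is not the real get_meminfo helper, which takes the field name and an optional node.
    get_hugepagesize_kb() {
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == Hugepagesize ]] || continue   # every other meminfo field is skipped, as traced
            echo "$val"                              # value in kB (2048 on this node)
            return 0
        done < /proc/meminfo
        return 1
    }

On this node the snapshot shows 'Hugepagesize: 2048 kB' with 'HugePages_Total: 2048', i.e. 4 GiB of hugetlb memory, matching the 'Hugetlb: 4194304 kB' field above.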
00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.945 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:28.946 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 
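By the end of the block above the harness has located two NUMA nodes (nodes_sys[0]=2048, nodes_sys[1]=0, no_nodes=2) and clear_hp is zeroing every per-node hugepage pool so each test starts from a clean slate. Stripped of the xtrace noise, the operation amounts to the sketch below; the sysfs paths are the standard kernel ones, the redirection into nr_hugepages is not visible in the trace itself, and root privileges are assumed:

    # Zero every per-node, per-size hugepage counter (sketch of what clear_hp
    # appears to be doing above; must run as root).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done

With the pools cleared and CLEAR_HUGE=yes exported, the first sub-test (default_setup) is launched on the next line.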
00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:28.947 13:36:43 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:08:28.947 13:36:43 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:28.947 13:36:43 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:28.947 13:36:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:28.947 ************************************ 00:08:28.947 START TEST default_setup 00:08:28.947 ************************************ 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:08:28.947 13:36:43 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:33.200 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:00:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:08:33.200 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:33.200 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:34.579 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:34.842 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41158412 kB' 'MemAvailable: 43047284 kB' 'Buffers: 2724 kB' 'Cached: 12786196 kB' 'SwapCached: 308 kB' 'Active: 10233284 kB' 'Inactive: 3197504 kB' 'Active(anon): 9788460 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645112 kB' 'Mapped: 221152 kB' 'Shmem: 10494484 kB' 'KReclaimable: 499160 kB' 'Slab: 1160944 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 661784 kB' 'KernelStack: 22240 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12707392 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218904 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
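Before this AnonHugePages scan started, verify_nr_hugepages tested the string 'always [madvise] never' against the pattern *\[\n\e\v\e\r\]*, which appears to gate the anonymous-hugepage accounting on transparent hugepages not being fully disabled. The active THP mode is the bracketed entry in that string; pulling it out by hand looks like this (an illustrative one-liner, not the script's own code):

    # /sys/kernel/mm/transparent_hugepage/enabled reads e.g. "always [madvise] never";
    # the bracketed word is the mode currently in effect.
    thp_mode=$(sed -n 's/.*\[\(.*\)\].*/\1/p' /sys/kernel/mm/transparent_hugepage/enabled)
    echo "$thp_mode"    # "madvise" on the machine traced here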
00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.843 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
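The AnonHugePages lookup has just returned 0 (anon=0), and the same field-by-field scan is now repeated for HugePages_Surp and, after that, HugePages_Rsvd. Outside the harness the same handful of counters can be read in one shot; this is an illustrative command, not a helper from the repo:

    # Quick manual check of the counters the verify step is collecting above.
    grep -E 'AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo

Per the dumps earlier in this run the pool now holds 1024 free 2048 kB pages, which lines up with what default_setup requested: 2097152 kB at 2048 kB per page is 2097152 / 2048 = 1024 hugepages.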
00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41161020 kB' 'MemAvailable: 43049892 kB' 'Buffers: 2724 kB' 'Cached: 12786200 kB' 'SwapCached: 308 kB' 'Active: 10233248 kB' 'Inactive: 3197504 kB' 'Active(anon): 9788424 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644772 kB' 'Mapped: 221148 kB' 'Shmem: 10494488 kB' 'KReclaimable: 499160 kB' 'Slab: 1161032 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 661872 kB' 'KernelStack: 22432 kB' 'PageTables: 9504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12707416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218888 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.844 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.845 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41158456 kB' 'MemAvailable: 43047328 kB' 'Buffers: 2724 kB' 'Cached: 12786216 kB' 'SwapCached: 308 kB' 'Active: 10233192 kB' 'Inactive: 3197504 kB' 'Active(anon): 9788368 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644652 kB' 'Mapped: 221148 kB' 'Shmem: 10494504 kB' 'KReclaimable: 499160 kB' 'Slab: 1161024 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 661864 kB' 'KernelStack: 22400 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12707436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218904 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 
13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.846 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.847 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:34.848 nr_hugepages=1024 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:34.848 resv_hugepages=0 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:34.848 surplus_hugepages=0 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:34.848 anon_hugepages=0 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41156680 kB' 'MemAvailable: 43045552 kB' 'Buffers: 2724 kB' 'Cached: 12786240 kB' 'SwapCached: 308 kB' 'Active: 10233308 
kB' 'Inactive: 3197504 kB' 'Active(anon): 9788484 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644748 kB' 'Mapped: 221148 kB' 'Shmem: 10494528 kB' 'KReclaimable: 499160 kB' 'Slab: 1161024 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 661864 kB' 'KernelStack: 22432 kB' 'PageTables: 9416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12705984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218920 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.848 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 
13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.849 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:08:34.850 
13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21986484 kB' 'MemUsed: 10652656 kB' 'SwapCached: 296 kB' 'Active: 6264320 kB' 'Inactive: 1143776 kB' 'Active(anon): 5971192 kB' 'Inactive(anon): 957244 kB' 'Active(file): 293128 kB' 'Inactive(file): 186532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6975184 kB' 'Mapped: 145800 kB' 'AnonPages: 436104 kB' 'Shmem: 6495228 kB' 'KernelStack: 12344 kB' 'PageTables: 6332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175812 kB' 'Slab: 489620 kB' 'SReclaimable: 175812 kB' 'SUnreclaim: 313808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:34.850 13:36:49 
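For orientation: the loop traced above is the get_meminfo helper from setup/common.sh, which setup/hugepages.sh calls here to read HugePages_Surp, HugePages_Rsvd and HugePages_Total. A minimal sketch of what the trace shows it doing follows; the field names, paths and script line references match the log, but this is a reconstruction for readability, not the verbatim setup/common.sh source.

    shopt -s extglob    # the "Node +([0-9]) " prefix strip below needs extended globs

    # get_meminfo <field> [node], e.g. "get_meminfo HugePages_Surp 0"
    get_meminfo() {
        local get=$1 node=${2:-} var val mem_f mem line
        mem_f=/proc/meminfo
        # With a node argument, switch to that node's meminfo file (common.sh@23-24 above).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")
        # Walk the fields until the requested one matches, then print its value (common.sh@31-33).
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

The three passes above against /proc/meminfo are what produced surp=0, resv=0 and HugePages_Total 1024 for the nr_hugepages checks at hugepages.sh@107-@110; the pass that starts here repeats the same field-by-field scan against node0's file.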
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.850 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:34.851 node0=1024 expecting 1024 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:34.851 00:08:34.851 real 0m5.983s 00:08:34.851 user 0m1.633s 00:08:34.851 sys 0m2.958s 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:34.851 13:36:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:08:34.851 ************************************ 00:08:34.851 END TEST default_setup 00:08:34.851 ************************************ 00:08:35.111 13:36:49 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:08:35.111 13:36:49 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:35.111 13:36:49 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:35.111 13:36:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:35.111 ************************************ 00:08:35.111 START TEST per_node_1G_alloc 00:08:35.111 ************************************ 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
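The per_node_1G_alloc test that starts above requests 1 GB per node as 2048 kB pages: get_test_nr_hugepages computes nr_hugepages=512 per node and then exports NRHUGE=512 HUGENODE=0,1 before calling scripts/setup.sh. A minimal standalone sketch of that per-node request and of the per-node meminfo read-back in the same style as common.sh's get_meminfo (hypothetical helper names; the sysfs paths are the standard kernel interfaces, not the SPDK setup.sh implementation):

#!/usr/bin/env bash
# Hypothetical sketch: allocate NRHUGE 2048 kB pages on each node in HUGENODE,
# then read the per-node counters back the way the trace above parses meminfo.
set -euo pipefail

NRHUGE=${NRHUGE:-512}        # 2048 kB pages requested per node
HUGENODE=${HUGENODE:-0,1}    # comma-separated NUMA node ids

request_node_hugepages() {   # hypothetical helper
    local node=$1 count=$2
    echo "$count" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
}

node_meminfo() {             # hypothetical helper; same parsing idea as get_meminfo
    local field=$1 node=$2 var val _
    # per-node lines read "Node <id> HugePages_Total: 512"; drop the prefix,
    # then split each line on ': ' exactly as the xtrace above shows
    while IFS=': ' read -r var val _; do
        [[ $var == "$field" ]] && { echo "$val"; return 0; }
    done < <(sed "s/^Node ${node} //" "/sys/devices/system/node/node${node}/meminfo")
    return 1
}

IFS=',' read -ra nodes <<< "$HUGENODE"
for node in "${nodes[@]}"; do
    request_node_hugepages "$node" "$NRHUGE"
    echo "node${node}: total=$(node_meminfo HugePages_Total "$node") free=$(node_meminfo HugePages_Free "$node")"
done

With both nodes populated this way, the HugePages_Total visible in the system-wide /proc/meminfo is the sum over nodes, which is why the verification below compares against nr_hugepages=1024 rather than 512.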
00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:35.111 13:36:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:39.312 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:08:39.312 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41137400 kB' 'MemAvailable: 43026272 kB' 'Buffers: 2724 kB' 'Cached: 12786352 kB' 'SwapCached: 308 kB' 'Active: 10231692 kB' 'Inactive: 3197504 kB' 'Active(anon): 9786868 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642552 kB' 'Mapped: 220380 kB' 'Shmem: 10494640 kB' 'KReclaimable: 499160 kB' 'Slab: 1161240 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662080 kB' 'KernelStack: 22304 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12698328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218968 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 
7340032 kB' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.312 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
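The HugePages_Surp read that starts here, together with the HugePages_Rsvd read that follows, feeds the same consistency check seen earlier in the trace at hugepages.sh@110: the HugePages_Total reported by the kernel should equal the requested page count plus any surplus and reserved pages. A minimal standalone sketch of that arithmetic against /proc/meminfo (hypothetical helper name, not the SPDK hugepages.sh code):

#!/usr/bin/env bash
# Hypothetical sketch of the "total == nr_hugepages + surp + resv" check.
set -euo pipefail

nr_hugepages=${1:-1024}      # pages the test asked for (1024 here: 512 per node x 2 nodes)

meminfo() {                  # hypothetical helper: numeric value of a /proc/meminfo field
    awk -F': *' -v f="$1" '$1 == f { print $2 + 0; exit }' /proc/meminfo
}

total=$(meminfo HugePages_Total)
surp=$(meminfo HugePages_Surp)
resv=$(meminfo HugePages_Rsvd)

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepages consistent: total=$total requested=$nr_hugepages surp=$surp resv=$resv"
else
    echo "mismatch: total=$total != $nr_hugepages + $surp + $resv" >&2
    exit 1
fi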
00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:39.313 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41138344 kB' 'MemAvailable: 43027216 kB' 'Buffers: 2724 kB' 'Cached: 12786356 kB' 'SwapCached: 308 kB' 'Active: 10230860 kB' 'Inactive: 3197504 kB' 'Active(anon): 9786036 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642212 kB' 'Mapped: 220240 kB' 'Shmem: 10494644 kB' 'KReclaimable: 499160 kB' 'Slab: 1161208 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662048 kB' 'KernelStack: 22272 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12698348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218936 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.314 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:39.315 13:36:53 
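The trace above is setup/common.sh's get_meminfo scanning every key in /proc/meminfo until it reaches HugePages_Surp, echoing its value (0 here) so that setup/hugepages.sh@99 can set surp=0; the same walk is repeated below for HugePages_Rsvd and HugePages_Total. A minimal standalone sketch of that lookup pattern, assuming plain bash; the function name get_meminfo_sketch and the simplified "Node <N> " stripping are illustrative, not the SPDK source:

  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # per-node statistics live in /sys/devices/system/node/node<N>/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # per-node files prefix every line with "Node <N> "; drop that prefix
      mem=("${mem[@]#Node $node }")
      local var val _
      while IFS=': ' read -r var val _; do
          # skip keys until the requested one, then print its value (unit dropped)
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

For example, surp=$(get_meminfo_sketch HugePages_Surp) yields 0 on this run, while get_meminfo_sketch HugePages_Free 0 would read node0's counter instead of the system-wide one.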
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.315 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41138692 kB' 'MemAvailable: 43027564 kB' 'Buffers: 2724 kB' 'Cached: 12786372 kB' 'SwapCached: 308 kB' 'Active: 10231060 kB' 'Inactive: 3197504 kB' 'Active(anon): 9786236 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642436 kB' 'Mapped: 220240 kB' 'Shmem: 10494660 kB' 'KReclaimable: 499160 kB' 'Slab: 1161208 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662048 kB' 'KernelStack: 22272 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12699492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218920 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.316 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.317 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:39.318 nr_hugepages=1024 00:08:39.318 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:39.318 resv_hugepages=0 00:08:39.318 13:36:53 
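With surp and resv both read back as 0, the echoed lines here record the counters, and hugepages.sh@107/@109 then assert that the kernel-reported HugePages_Total (1024) equals the requested nr_hugepages plus surplus plus reserved pages. A hedged sketch of that bookkeeping, reusing the hypothetical get_meminfo_sketch helper above and the variable names visible in the trace:

  nr_hugepages=1024                              # pages requested by the test
  surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in this run
  total=$(get_meminfo_sketch HugePages_Total)    # 1024 in this run
  # the configured total must account for requested + surplus + reserved pages
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
  (( total == nr_hugepages )) || echo "unexpected surplus/reserved hugepages" >&2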
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:39.318 surplus_hugepages=0 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:39.319 anon_hugepages=0 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41138836 kB' 'MemAvailable: 43027708 kB' 'Buffers: 2724 kB' 'Cached: 12786416 kB' 'SwapCached: 308 kB' 'Active: 10230844 kB' 'Inactive: 3197504 kB' 'Active(anon): 9786020 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642152 kB' 'Mapped: 220240 kB' 'Shmem: 10494704 kB' 'KReclaimable: 499160 kB' 'Slab: 1161208 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662048 kB' 'KernelStack: 22336 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12701256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218904 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.319 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:39.320 13:36:53 
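At this point the totals check has passed and get_nodes (setup/hugepages.sh@27-33, traced here and on the following lines) enumerates the NUMA nodes under /sys/devices/system/node, recording the per-node request of 512 pages (2048 kB each) on this two-node machine before the per-node HugePages_Surp reads that follow. A rough standalone equivalent, assuming plain bash globbing where the real script uses an extglob +([0-9]) pattern:

  shopt -s nullglob
  declare -a nodes_sys
  for node in /sys/devices/system/node/node[0-9]*; do
      nodes_sys[${node##*node}]=512    # 512 x 2048 kB pages requested per node
  done
  no_nodes=${#nodes_sys[@]}            # 2 on this machine
  echo "nodes ${!nodes_sys[*]}: ${nodes_sys[*]} hugepages each"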
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:39.320 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 23023788 kB' 'MemUsed: 9615352 kB' 'SwapCached: 296 kB' 'Active: 6262536 kB' 'Inactive: 1143776 kB' 'Active(anon): 5969408 kB' 'Inactive(anon): 957244 kB' 'Active(file): 293128 kB' 'Inactive(file): 186532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6975296 kB' 'Mapped: 144948 kB' 'AnonPages: 434224 kB' 'Shmem: 6495340 kB' 'KernelStack: 12328 kB' 'PageTables: 6132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175812 kB' 'Slab: 489332 kB' 'SReclaimable: 175812 kB' 'SUnreclaim: 313520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.321 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 18115240 kB' 'MemUsed: 9540816 kB' 'SwapCached: 12 kB' 'Active: 3968792 kB' 'Inactive: 2053728 kB' 'Active(anon): 3817096 kB' 'Inactive(anon): 390648 kB' 'Active(file): 151696 kB' 'Inactive(file): 1663080 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5814156 kB' 'Mapped: 75292 kB' 'AnonPages: 208448 kB' 'Shmem: 3999368 kB' 'KernelStack: 9992 kB' 'PageTables: 2860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 323348 kB' 'Slab: 671876 kB' 'SReclaimable: 323348 kB' 'SUnreclaim: 348528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
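The trace above is setup/common.sh's get_meminfo helper being single-stepped under xtrace: because a node argument (1) was passed, it switches mem_f from /proc/meminfo to /sys/devices/system/node/node1/meminfo, strips the "Node 1 " prefix from every line it read with mapfile, prints the fields once, and then compares each field name in turn against HugePages_Surp. The \H\u\g\e\P\a\g\e\s\_\S\u\r\p spelling is just bash xtrace escaping the quoted right-hand side of [[ == ]] so the comparison is literal rather than a glob. A condensed sketch of that logic, reconstructed from the traced statements rather than copied from the SPDK source (names and details may differ):

shopt -s extglob   # needed for the +([0-9]) pattern that strips the "Node N " prefix

get_meminfo() {    # hypothetical reconstruction of the traced helper
    local get=$1                  # field to report, e.g. HugePages_Surp
    local node=$2                 # optional NUMA node number
    local var val
    local mem_f=/proc/meminfo mem
    # Prefer the per-node view when a node was requested and sysfs exposes one
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines start with "Node <n> "; strip it so the keys line up
    mem=("${mem[@]#Node +([0-9]) }")
    # Walk the fields and print the value of the first key that matches $get
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1   # field not found (the traced runs always find it)
}

Called as get_meminfo HugePages_Surp 1, this prints 0 for the dump shown above (node1 has no surplus pages), which is why hugepages.sh@117 adds nothing to nodes_test[1] and the node keeps its expected 512.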
00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.322 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:39.323 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:39.324 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:39.324 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:39.324 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:39.324 node0=512 expecting 512 00:08:39.324 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:39.324 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:39.324 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:39.324 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:08:39.324 node1=512 expecting 512 00:08:39.324 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:39.324 00:08:39.324 real 0m4.359s 00:08:39.324 user 0m1.684s 00:08:39.324 sys 0m2.755s 00:08:39.324 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:39.324 13:36:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:39.324 ************************************ 00:08:39.324 END TEST per_node_1G_alloc 00:08:39.324 ************************************ 00:08:39.583 13:36:53 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:08:39.583 13:36:53 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:39.583 13:36:53 
setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:39.583 13:36:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:39.583 ************************************ 00:08:39.583 START TEST even_2G_alloc 00:08:39.583 ************************************ 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:39.583 13:36:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:43.785 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
00:08:43.785 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:08:43.785 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41133772 kB' 'MemAvailable: 43022644 kB' 'Buffers: 2724 kB' 'Cached: 12786536 kB' 'SwapCached: 308 kB' 'Active: 10232328 kB' 'Inactive: 3197504 kB' 'Active(anon): 9787504 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643720 kB' 'Mapped: 220380 kB' 'Shmem: 10494824 kB' 'KReclaimable: 499160 kB' 'Slab: 1161216 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662056 kB' 'KernelStack: 22256 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12699564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218872 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.785 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
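By this point the even_2G_alloc verification has checked that transparent hugepages are not set to never and read AnonHugePages (anon=0); next it walks HugePages_Surp and then the per-node counters. The expectation comes from the setup traced earlier: a 2 GiB (2097152 kB) request over 2048 kB hugepages gives nr_hugepages=1024, and with two nodes and HUGE_EVEN_ALLOC=yes that is 512 pages per node, the same 512/512 split the previous test reported as 'node0=512 expecting 512' and 'node1=512 expecting 512'. Below is a stand-alone sketch of the same per-node check outside the test harness; the awk/sysfs approach and the default expectation of 512 are illustrative assumptions, not part of the SPDK scripts:

#!/usr/bin/env bash
# Illustrative only (not an SPDK script): report HugePages_Total for every NUMA
# node from sysfs and compare it with an expected per-node count.
expected=${1:-512}   # 512 matches the 1024-page / 2-node split used in this run
status=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*/node}
    # Per-node meminfo lines look like "Node 1 HugePages_Total:   512"
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected}"
    [[ $total == "$expected" ]] || status=1
done
exit $status

The echo format mirrors the 'nodeN=... expecting ...' lines the harness prints, so its output can be compared directly against a log like this one.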
00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:43.786 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41133420 kB' 'MemAvailable: 43022292 kB' 'Buffers: 2724 kB' 'Cached: 12786540 kB' 'SwapCached: 308 kB' 'Active: 10231632 kB' 'Inactive: 3197504 kB' 'Active(anon): 9786808 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642952 kB' 'Mapped: 220256 kB' 'Shmem: 10494828 kB' 'KReclaimable: 499160 kB' 'Slab: 1161244 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662084 kB' 'KernelStack: 22272 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12699584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218824 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.787 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.788 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41132420 kB' 'MemAvailable: 43021292 kB' 'Buffers: 2724 kB' 'Cached: 12786556 kB' 'SwapCached: 308 kB' 'Active: 10231652 kB' 'Inactive: 3197504 kB' 'Active(anon): 9786828 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642956 kB' 'Mapped: 220256 kB' 'Shmem: 10494844 kB' 'KReclaimable: 499160 kB' 'Slab: 1161244 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662084 kB' 'KernelStack: 22272 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12699604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218824 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 
13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.789 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:43.790 nr_hugepages=1024 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:43.790 resv_hugepages=0 00:08:43.790 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:43.790 surplus_hugepages=0 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:43.791 anon_hugepages=0 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41132420 kB' 'MemAvailable: 43021292 kB' 'Buffers: 2724 kB' 'Cached: 12786580 kB' 'SwapCached: 308 kB' 'Active: 10231624 kB' 'Inactive: 3197504 kB' 'Active(anon): 9786800 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642880 kB' 'Mapped: 220256 kB' 'Shmem: 10494868 kB' 'KReclaimable: 499160 kB' 'Slab: 1161244 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662084 kB' 'KernelStack: 22256 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12699628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218824 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.791 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.791 
13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (repeated IFS=': ' / read -r var val _ / [[ field == HugePages_Total ]] / continue iterations over the remaining meminfo fields, SwapTotal through Unaccepted; none match, so the loop keeps scanning) 00:08:43.791-00:08:43.792
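The iterations condensed above are the inner loop of setup/common.sh's get_meminfo helper. As a reading aid, here is a minimal sketch of that pattern reconstructed from this xtrace (the helper name get_meminfo_sketch is illustrative, and the real setup/common.sh may differ in detail): each meminfo line is split on ': ', non-matching keys are skipped with continue, and the value of the requested key is echoed.

    # Sketch of the get_meminfo field scan, reconstructed from the xtrace;
    # not copied from the SPDK tree.
    get_meminfo_sketch() {
        local get=$1                 # key to look up, e.g. HugePages_Total
        local mem_f=/proc/meminfo
        local var val
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other field
            echo "$val"                        # e.g. 1024 on the next trace line
            return 0
        done < "$mem_f"
        return 1
    }
    get_meminfo_sketch HugePages_Total       # -> 1024 on this host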
13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:43.792 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:43.793 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.793 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.793 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.793 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.793 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22995700 kB' 'MemUsed: 9643440 kB' 'SwapCached: 296 kB' 'Active: 6263528 kB' 'Inactive: 1143776 kB' 'Active(anon): 5970400 kB' 'Inactive(anon): 957244 kB' 'Active(file): 293128 kB' 'Inactive(file): 186532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6975412 kB' 'Mapped: 144964 kB' 'AnonPages: 435212 kB' 'Shmem: 6495456 kB' 'KernelStack: 12280 kB' 
'PageTables: 6148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175812 kB' 'Slab: 489424 kB' 'SReclaimable: 175812 kB' 'SUnreclaim: 313612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:43.793 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (repeated read/compare/continue iterations over the node0 meminfo fields, MemTotal through Unaccepted; none match HugePages_Surp) 00:08:43.794 13:36:58
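The same scan is reused per NUMA node; the common.sh@17-29 entries above show how the source file is chosen. A small sketch of that selection, assumed from the trace rather than taken from the script itself: the per-node sysfs meminfo replaces /proc/meminfo when it exists, and the leading "Node <id> " prefix is stripped so the same key/value parser works for both sources.

    # Per-node meminfo selection as implied by common.sh@22-29 in this log.
    shopt -s extglob                  # needed for the +([0-9]) pattern below
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node <id> "; strip it so the
    # "key: value" parser handles both sources
    mem=("${mem[@]#Node +([0-9]) }")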
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 18136468 kB' 'MemUsed: 9519588 kB' 'SwapCached: 12 kB' 'Active: 3967832 kB' 'Inactive: 2053728 kB' 'Active(anon): 3816136 kB' 'Inactive(anon): 390648 kB' 'Active(file): 151696 kB' 'Inactive(file): 1663080 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5814240 kB' 'Mapped: 75292 kB' 'AnonPages: 207356 kB' 'Shmem: 3999452 kB' 'KernelStack: 9976 kB' 
'PageTables: 2812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 323348 kB' 'Slab: 671820 kB' 'SReclaimable: 323348 kB' 'SUnreclaim: 348472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:43.794 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (repeated read/compare/continue iterations over the node1 meminfo fields, MemTotal through Unaccepted; none match HugePages_Surp) 00:08:43.795 13:36:58
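With both per-node surplus values read back as 0, the verification at hugepages.sh@110-130 reduces to the arithmetic below. This is a sketch of that check using the values from this run (1024 pages total, two nodes, no reserved or surplus pages), not the script verbatim; the two "nodeN=512 expecting 512" lines that follow in the log are exactly this output.

    # Even-allocation check as seen at hugepages.sh@110-130 in this trace.
    nr_hugepages=1024 surp=0 resv=0
    nodes_test=(512 512)                 # expected even split across node0/node1

    (( 1024 == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total"
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # hugepages.sh@116
        (( nodes_test[node] += 0 ))      # @117: this node's HugePages_Surp (0 here)
        echo "node${node}=${nodes_test[node]} expecting 512"
    done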
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:43.795 node0=512 expecting 512 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:08:43.795 node1=512 expecting 512 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:43.795 00:08:43.795 real 0m4.329s 00:08:43.795 user 0m1.612s 00:08:43.795 sys 0m2.798s 00:08:43.795 13:36:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:43.796 13:36:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:43.796 ************************************ 00:08:43.796 END TEST even_2G_alloc 00:08:43.796 ************************************ 00:08:43.796 13:36:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:08:43.796 13:36:58 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:43.796 13:36:58 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:43.796 13:36:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:43.796 ************************************ 00:08:43.796 START TEST odd_alloc 00:08:43.796 
************************************ 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:08:43.796 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:08:44.055 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:44.055 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:08:44.055 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:44.055 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:08:44.055 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:44.055 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:08:44.055 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:08:44.055 13:36:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:08:44.055 13:36:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:44.055 13:36:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:48.256 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:08:48.256 
0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:08:48.256 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:08:48.256 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:08:48.256 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:08:48.256 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:48.256 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:48.256 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:48.256 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:48.256 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:48.256 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41132528 kB' 'MemAvailable: 43021400 kB' 'Buffers: 2724 kB' 'Cached: 12786712 kB' 'SwapCached: 308 kB' 'Active: 10239448 kB' 'Inactive: 3197504 kB' 'Active(anon): 9794624 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650148 kB' 'Mapped: 220836 kB' 'Shmem: 10495000 kB' 'KReclaimable: 499160 kB' 'Slab: 1161468 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662308 kB' 'KernelStack: 22272 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12706660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218876 kB' 
'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:48.257 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (repeated read/compare/continue iterations over the meminfo fields, MemTotal through Committed_AS so far; none match AnonHugePages) 00:08:48.258 13:37:02
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 
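The trace above is the get_meminfo helper resolving AnonHugePages: common.sh reads /proc/meminfo into an array with mapfile, strips any leading "Node <n> " prefix, then splits each entry on ': ' with read -r var val _ and continues past every non-matching key until AnonHugePages is reached, at which point it echoes the value and hugepages.sh records anon=0. The backslash-escaped right-hand side of each [[ ... == ... ]] is simply how bash xtrace renders the quoted, literal key being compared. A minimal standalone sketch of the same lookup, assuming a plain /proc/meminfo and no per-node meminfo file (names are illustrative, not the SPDK helper itself):

  get_meminfo_sketch() {
      # Scan /proc/meminfo for one key, splitting each line on ': ' exactly
      # as the traced loop does, and print its numeric value (0 if absent).
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done < /proc/meminfo
      echo 0
  }

  anon=$(get_meminfo_sketch AnonHugePages)   # -> 0 in this run, matching anon=0 above
  echo "anon=$anon"
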
13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41132336 kB' 'MemAvailable: 43021208 kB' 'Buffers: 2724 kB' 'Cached: 12786716 kB' 'SwapCached: 308 kB' 'Active: 10233160 kB' 'Inactive: 3197504 kB' 'Active(anon): 9788336 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644340 kB' 'Mapped: 220680 kB' 'Shmem: 10495004 kB' 'KReclaimable: 499160 kB' 'Slab: 1161460 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662300 kB' 'KernelStack: 22272 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12700560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218824 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.258 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.259 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41133304 kB' 'MemAvailable: 43022176 kB' 'Buffers: 2724 kB' 'Cached: 12786732 kB' 'SwapCached: 308 kB' 'Active: 10233092 kB' 'Inactive: 3197504 kB' 'Active(anon): 9788268 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644260 kB' 'Mapped: 220268 kB' 'Shmem: 10495020 kB' 'KReclaimable: 499160 kB' 'Slab: 1161460 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662300 kB' 'KernelStack: 22272 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12700580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218824 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- 
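For orientation, the snapshot printed just above is self-consistent on the hugepage side: HugePages_Total and HugePages_Free are both 1025 with a Hugepagesize of 2048 kB, which accounts exactly for the reported Hugetlb footprint; the odd, non-round page count (1025 rather than 1024) is presumably what the odd_alloc case exercises. A quick check of that arithmetic:

  # 1025 huge pages x 2048 kB per page = 2099200 kB, matching 'Hugetlb: 2099200 kB'
  echo $(( 1025 * 2048 ))   # prints 2099200
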
setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.260 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 
13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.261 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:08:48.262 nr_hugepages=1025 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:48.262 resv_hugepages=0 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:48.262 surplus_hugepages=0 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:48.262 anon_hugepages=0 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- 
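At this point hugepages.sh has gathered all three counters (anon=0, surp=0, resv=0), prints the summary lines nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and asserts that 1025 matches nr_hugepages + surp + resv and, since both adjustments are zero, nr_hugepages alone, before re-reading HugePages_Total from /proc/meminfo below. A hedged restatement of those two checks (variable names here are illustrative, not the script's own):

  expected=1025       # the odd page count this test case requests
  nr_hugepages=1025   # pages the kernel reports as configured
  surp=0              # HugePages_Surp gathered above
  resv=0              # HugePages_Rsvd gathered above

  (( expected == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
  (( expected == nr_hugepages ))               || echo "unexpected surplus/reserved pages"
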
setup/common.sh@20 -- # local mem_f mem 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41133256 kB' 'MemAvailable: 43022128 kB' 'Buffers: 2724 kB' 'Cached: 12786772 kB' 'SwapCached: 308 kB' 'Active: 10232768 kB' 'Inactive: 3197504 kB' 'Active(anon): 9787944 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643864 kB' 'Mapped: 220268 kB' 'Shmem: 10495060 kB' 'KReclaimable: 499160 kB' 'Slab: 1161460 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662300 kB' 'KernelStack: 22256 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12700600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218824 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.262 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.263 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 23007752 kB' 'MemUsed: 9631388 kB' 'SwapCached: 296 kB' 'Active: 6264984 kB' 'Inactive: 1143776 kB' 'Active(anon): 5971856 kB' 'Inactive(anon): 957244 kB' 'Active(file): 293128 kB' 'Inactive(file): 186532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6975500 kB' 'Mapped: 144976 kB' 'AnonPages: 436516 kB' 'Shmem: 6495544 kB' 'KernelStack: 12296 kB' 'PageTables: 6188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175812 kB' 'Slab: 489704 kB' 'SReclaimable: 175812 kB' 'SUnreclaim: 313892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 
0' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.264 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 18125252 kB' 'MemUsed: 9530804 kB' 'SwapCached: 12 kB' 'Active: 3968028 kB' 'Inactive: 2053728 kB' 'Active(anon): 3816332 kB' 'Inactive(anon): 390648 kB' 'Active(file): 151696 kB' 'Inactive(file): 1663080 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5814324 kB' 'Mapped: 75292 kB' 'AnonPages: 207568 kB' 'Shmem: 3999536 kB' 'KernelStack: 9960 kB' 'PageTables: 2756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 323348 kB' 'Slab: 671756 kB' 'SReclaimable: 323348 kB' 'SUnreclaim: 348408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.265 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
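The scan traced above is how the test reads a single per-node counter: it opens /sys/devices/system/node/node1/meminfo, drops the leading "Node <n>" prefix, splits each line on ': ', and keeps skipping fields until the requested key (here HugePages_Surp) matches, then echoes its value. A minimal standalone sketch of that lookup, assuming bash and the standard sysfs layout; get_node_meminfo and its arguments are illustrative names, and the prefix handling is simplified compared to the mapfile/extglob approach in setup/common.sh:

get_node_meminfo() {
    # e.g. get_node_meminfo HugePages_Surp 1  ->  prints 0 for the node traced above
    local key=$1 node=$2
    local file=/sys/devices/system/node/node${node}/meminfo
    local var val
    # per-node lines look like "Node 1 HugePages_Surp:     0"; with IFS=': '
    # the third field is the key and the fourth the value (any trailing "kB" is dropped)
    while IFS=': ' read -r _ _ var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}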
00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.266 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:08:48.267 node0=512 expecting 513 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:08:48.267 node1=513 expecting 512 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:08:48.267 00:08:48.267 real 0m4.416s 00:08:48.267 user 0m1.568s 00:08:48.267 sys 0m2.933s 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:48.267 13:37:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:48.267 ************************************ 00:08:48.267 END TEST odd_alloc 00:08:48.267 ************************************ 00:08:48.267 13:37:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:08:48.267 13:37:02 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:48.267 13:37:02 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:48.267 13:37:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:48.526 ************************************ 00:08:48.526 START TEST custom_alloc 00:08:48.526 ************************************ 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 
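At this point the odd_alloc step has confirmed that the 1025-page pool was split 512/513 across the two NUMA nodes, and custom_alloc starts by converting a requested pool size into a page count and an even per-node split. A rough sketch of that arithmetic under the values visible in the trace (2048 kB Hugepagesize, two nodes, the size argument taken to be in kB), with illustrative variable names; the later nodes_hp assignments in the trace override the even split:

# 1 GiB request -> 512 x 2 MiB pages, split evenly across the two nodes
size_kb=1048576          # argument passed to get_test_nr_hugepages in the trace
hugepagesize_kb=2048     # Hugepagesize reported in /proc/meminfo above
nr_hugepages=$(( size_kb / hugepagesize_kb ))   # -> 512
per_node=$(( nr_hugepages / 2 ))                # -> 256 per node
echo "nr_hugepages=$nr_hugepages per_node=$per_node"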
00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:48.526 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for 
node in "${!nodes_hp[@]}" 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:48.527 13:37:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:52.724 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
00:08:52.724 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:08:52.724 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40058408 kB' 'MemAvailable: 41947280 kB' 'Buffers: 2724 kB' 'Cached: 12786868 kB' 'SwapCached: 308 kB' 'Active: 10235464 kB' 'Inactive: 3197504 kB' 'Active(anon): 9790640 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645824 kB' 'Mapped: 220416 kB' 'Shmem: 10495156 kB' 'KReclaimable: 499160 kB' 'Slab: 1161408 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662248 kB' 'KernelStack: 22304 kB' 'PageTables: 9400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12701212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219080 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 
0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.724 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.725 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
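[editor's note] The AnonHugePages pass above ends with anon=0, and the same get_meminfo helper is now re-entered for HugePages_Surp. From the trace it captures the whole snapshot with mapfile (stripping the "Node N" prefix when a per-node meminfo file is used) and then walks every "key: value" pair, which is why each field shows up in the xtrace before the match. A compact stand-in with the same effect, assuming plain /proc/meminfo only (not the actual setup/common.sh code):

  # get_meminfo_sketch FIELD -> prints the value of FIELD from /proc/meminfo
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # stop at the first matching key instead of tracing every miss
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  get_meminfo_sketch HugePages_Surp    # the run above resolves this to 0
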
00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40060564 kB' 'MemAvailable: 41949436 kB' 'Buffers: 2724 kB' 'Cached: 12786872 kB' 'SwapCached: 308 kB' 'Active: 10233620 kB' 'Inactive: 3197504 kB' 'Active(anon): 9788796 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644476 kB' 'Mapped: 220276 kB' 'Shmem: 10495160 kB' 'KReclaimable: 499160 kB' 'Slab: 1161396 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662236 kB' 'KernelStack: 22256 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12701660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219048 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.726 13:37:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.726 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 
13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
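[editor's note] Once surp and resv are known, verify_nr_hugepages compares the kernel's counters against the HUGENODE split requested earlier (nodes_hp[0]=512, nodes_hp[1]=1024, 1536 pages in total). A rough way to make the same comparison by hand, reading the standard per-node sysfs counters directly (variable names here are illustrative, not taken from the SPDK scripts):

  declare -A want=( [0]=512 [1]=1024 )    # the HUGENODE split from the trace
  total=0
  for node in "${!want[@]}"; do
      got=$(cat "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages")
      (( total += got ))
      [[ $got -eq ${want[$node]} ]] || echo "node$node: got $got, want ${want[$node]}"
  done
  grep HugePages_Total /proc/meminfo      # expected to report 1536 for this split
  (( total == 1536 )) && echo "per-node counts add up"

The trace reads HugePages_Surp here (and HugePages_Rsvd further on) so that surplus and reserved pages can be accounted for in that comparison.
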
00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.727 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
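[editor's note] The counters being extracted in this pass are the kernel's global hugetlb accounting fields; outside the harness they can be inspected in one line (the grep below is just an illustration, not part of the SPDK scripts):

  grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
  # HugePages_Rsvd: pages committed to mappings but not yet faulted in.
  # HugePages_Surp: surplus pages allocated beyond nr_hugepages, which only
  #                 happens when /proc/sys/vm/nr_overcommit_hugepages permits it.
  # With the pool sized explicitly through HUGENODE, this run reports 0 for both.
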
00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:52.728 13:37:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40057168 kB' 'MemAvailable: 41946040 kB' 'Buffers: 2724 kB' 'Cached: 12786884 kB' 'SwapCached: 308 kB' 'Active: 10237876 kB' 'Inactive: 3197504 kB' 'Active(anon): 9793052 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648760 kB' 'Mapped: 220780 kB' 'Shmem: 
10495172 kB' 'KReclaimable: 499160 kB' 'Slab: 1161396 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662236 kB' 'KernelStack: 22272 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12705908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219032 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.728 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 
13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.729 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
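
The repeated "[[ <Key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" pairs above are bash xtrace from setup/common.sh scanning a meminfo dump one "Key: value" line at a time until it reaches HugePages_Rsvd. A minimal sketch of that lookup, reconstructed from the trace (simplified; the name get_meminfo_sketch is illustrative, this is not the project's verbatim helper):

  shopt -s extglob    # needed for the +([0-9]) pattern, as in the traced helper

  # Print the value of one meminfo field, optionally from a single NUMA node's view.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node statistics live in sysfs and carry a "Node <N> " prefix on every line.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while read -r line; do
          line=${line#Node +([0-9]) }          # strip the "Node <N> " prefix if present
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue     # one xtrace "continue" per non-matching key
          echo "$val"
          return 0
      done < "$mem_f"
      return 1
  }

On the machine in this log, get_meminfo_sketch HugePages_Rsvd prints 0 and get_meminfo_sketch HugePages_Total prints 1536, matching the values echoed at setup/common.sh@33 further down.
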
00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:52.730 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:08:52.731 nr_hugepages=1536 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:52.731 resv_hugepages=0 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:52.731 surplus_hugepages=0 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:52.731 anon_hugepages=0 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40058792 kB' 'MemAvailable: 41947664 kB' 'Buffers: 2724 kB' 'Cached: 12786908 kB' 'SwapCached: 308 kB' 'Active: 10234280 kB' 'Inactive: 3197504 kB' 'Active(anon): 9789456 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645116 kB' 'Mapped: 221064 kB' 'Shmem: 10495196 kB' 'KReclaimable: 499160 kB' 'Slab: 1161396 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662236 kB' 'KernelStack: 22272 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12701996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219064 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.731 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:52.732 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 23004596 kB' 'MemUsed: 9634544 kB' 'SwapCached: 296 kB' 'Active: 6264840 kB' 'Inactive: 1143776 kB' 'Active(anon): 5971712 kB' 'Inactive(anon): 957244 kB' 'Active(file): 293128 kB' 'Inactive(file): 186532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6975512 kB' 'Mapped: 144984 kB' 'AnonPages: 436292 kB' 'Shmem: 6495556 kB' 'KernelStack: 12248 kB' 'PageTables: 5980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175812 kB' 'Slab: 489820 kB' 'SReclaimable: 175812 kB' 'SUnreclaim: 314008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.733 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
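
By this point the trace has confirmed the box-wide totals (nr_hugepages=1536 with resv=0 and surp=0, and a global HugePages_Total of 1536) and is walking the two NUMA nodes, whose requested split from get_nodes above is 512 pages on node 0 and 1024 on node 1. The bookkeeping done by the hugepages.sh@115-@117 lines amounts to the per-node check sketched below; the script shape and the awk extraction are illustrative, while the paths, the 512/1024 split, and the 1536 total come from the trace:

  #!/usr/bin/env bash
  # Sketch: verify that the per-node HugePages_Total values add up to the
  # box-wide nr_hugepages that the custom_alloc test configured.
  nodes_expected=(512 1024)     # node0/node1 split requested by the test
  total_expected=1536           # nr_hugepages + resv (0) + surp (0)

  sum=0
  for node in "${!nodes_expected[@]}"; do
      # Same per-node view the trace reads via "get_meminfo HugePages_Surp <node>"
      meminfo=/sys/devices/system/node/node$node/meminfo
      got=$(awk '$3 == "HugePages_Total:" {print $4}' "$meminfo")
      echo "node$node: expected ${nodes_expected[$node]}, kernel reports $got"
      sum=$((sum + got))
  done

  # 512 + 1024 == 1536 on the machine in this log.
  (( sum == total_expected )) && echo "per-node totals match nr_hugepages=$total_expected"
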
00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 17043696 kB' 'MemUsed: 10612360 kB' 'SwapCached: 12 kB' 'Active: 3968800 kB' 'Inactive: 2053728 kB' 'Active(anon): 3817104 kB' 'Inactive(anon): 390648 kB' 'Active(file): 151696 kB' 'Inactive(file): 1663080 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5814472 kB' 'Mapped: 75476 kB' 'AnonPages: 208120 kB' 'Shmem: 3999684 kB' 'KernelStack: 9992 kB' 'PageTables: 2840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 323348 kB' 'Slab: 671576 kB' 'SReclaimable: 323348 kB' 'SUnreclaim: 348228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.734 13:37:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.734 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
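Annotation: the long runs of "continue" traced above come from setup/common.sh's get_meminfo helper scanning every /proc/meminfo field in order until it reaches the requested key (HugePages_Surp in this pass), echoing that key's value and returning 0. The sketch below is reconstructed from the traced commands only, not taken verbatim from the SPDK script; the function body, the local names, and the surp=$(...) call site are illustrative assumptions.

# Minimal sketch of the meminfo lookup that produces the xtrace above
# (reconstructed from the logged commands; not the verbatim setup/common.sh).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node-local meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob                      # needed for the "Node N " strip below
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix on per-node files
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the repeated "continue" entries in the log
        echo "$val"                       # e.g. 0 for HugePages_Surp in this run
        return 0
    done
    return 1
}
# Illustrative call site, as hugepages.sh uses it when checking surplus pages:
surp=$(get_meminfo HugePages_Surp)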
00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.735 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:52.736 node0=512 expecting 512 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:08:52.736 node1=1024 expecting 1024 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:08:52.736 00:08:52.736 real 0m4.400s 00:08:52.736 user 0m1.638s 00:08:52.736 sys 0m2.846s 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:52.736 13:37:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:52.736 ************************************ 00:08:52.736 END TEST custom_alloc 00:08:52.736 ************************************ 00:08:52.736 13:37:07 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:08:52.736 13:37:07 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:52.736 13:37:07 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:52.736 13:37:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:52.996 ************************************ 00:08:52.996 START TEST no_shrink_alloc 00:08:52.996 ************************************ 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:52.996 13:37:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:57.197 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:08:57.197 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41095372 kB' 'MemAvailable: 42984244 kB' 'Buffers: 2724 kB' 'Cached: 12787040 kB' 'SwapCached: 308 kB' 'Active: 10236952 kB' 'Inactive: 3197504 kB' 'Active(anon): 9792128 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647964 kB' 'Mapped: 220316 kB' 'Shmem: 10495328 kB' 'KReclaimable: 499160 kB' 'Slab: 1160712 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 661552 kB' 'KernelStack: 22512 kB' 'PageTables: 9452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12705084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219112 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.197 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.198 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41095172 kB' 'MemAvailable: 42984044 kB' 'Buffers: 2724 kB' 'Cached: 12787044 kB' 'SwapCached: 308 kB' 'Active: 10236760 kB' 'Inactive: 3197504 kB' 'Active(anon): 9791936 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647744 kB' 'Mapped: 220308 kB' 'Shmem: 10495332 kB' 'KReclaimable: 499160 kB' 'Slab: 1160668 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 661508 kB' 'KernelStack: 22544 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12705100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219064 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 
13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.199 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.200 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41094932 kB' 'MemAvailable: 42983804 kB' 'Buffers: 2724 kB' 'Cached: 12787064 kB' 'SwapCached: 308 kB' 'Active: 10237168 kB' 'Inactive: 3197504 kB' 'Active(anon): 9792344 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648180 kB' 'Mapped: 220316 kB' 'Shmem: 10495352 kB' 'KReclaimable: 499160 kB' 'Slab: 1160724 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 661564 kB' 'KernelStack: 22576 kB' 'PageTables: 9828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12705124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219144 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 
kB' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:57.201 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 
13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.202 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:57.203 13:37:11 
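Both meminfo scans above (HugePages_Surp, then HugePages_Rsvd) run the same get_meminfo loop from setup/common.sh: the snapshot emitted by the printf line is captured with mapfile, any "Node N " prefix is stripped, and each "key: value" line is split with IFS=': ' until the requested key is found, whose value is echoed back to hugepages.sh (surp=0 and, just below, resv=0). A condensed, self-contained sketch of that logic, reconstructed from the trace rather than copied from the script, so names and details may differ slightly:

  #!/usr/bin/env bash
  shopt -s extglob
  # Condensed sketch of the get_meminfo behaviour visible in the trace.
  get_meminfo() {
          local get=$1 node=$2
          local var val
          local mem_f=/proc/meminfo mem
          # With a node argument, read the per-node file from sysfs instead.
          if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                  mem_f=/sys/devices/system/node/node$node/meminfo
          fi
          mapfile -t mem < "$mem_f"
          # Per-node files prefix every line with "Node N "; drop that prefix.
          mem=("${mem[@]#Node +([0-9]) }")
          while IFS=': ' read -r var val _; do
                  [[ $var == "$get" ]] || continue
                  echo "$val"        # value in kB, or a bare page count
                  return 0
          done < <(printf '%s\n' "${mem[@]}")
          return 1
  }

  # On a Linux host:
  get_meminfo HugePages_Rsvd      # system-wide, 0 in this run
  get_meminfo HugePages_Surp 0    # NUMA node 0, 0 in this run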
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:57.203 nr_hugepages=1024 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:57.203 resv_hugepages=0 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:57.203 surplus_hugepages=0 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:57.203 anon_hugepages=0 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41094692 kB' 'MemAvailable: 42983564 kB' 'Buffers: 2724 kB' 'Cached: 12787064 kB' 'SwapCached: 308 kB' 'Active: 10237064 kB' 'Inactive: 3197504 kB' 'Active(anon): 9792240 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648132 kB' 'Mapped: 220308 kB' 'Shmem: 10495352 kB' 'KReclaimable: 499160 kB' 'Slab: 1160724 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 661564 kB' 'KernelStack: 22544 kB' 'PageTables: 9612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12705276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219032 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.203 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 
13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.204 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21962104 kB' 'MemUsed: 10677036 kB' 'SwapCached: 296 kB' 'Active: 6265476 kB' 'Inactive: 1143776 kB' 'Active(anon): 5972348 kB' 'Inactive(anon): 957244 kB' 'Active(file): 293128 kB' 'Inactive(file): 186532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6975544 kB' 'Mapped: 144996 kB' 'AnonPages: 437576 kB' 'Shmem: 6495588 kB' 'KernelStack: 12488 kB' 'PageTables: 6660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175812 kB' 'Slab: 489188 kB' 'SReclaimable: 175812 kB' 'SUnreclaim: 313376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
00:08:57.205 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': '; read -r var val _; each remaining node0 meminfo field (MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total) is compared against HugePages_Surp and skipped with continue]
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:08:57.206 13:37:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:09:01.471 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:09:01.471 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:09:01.471 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
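The field-by-field xtrace above is the scan inside setup/common.sh's get_meminfo helper. A minimal sketch of that helper, reconstructed from what the trace shows (the function body and argument handling are inferred, not copied from the SPDK source):

shopt -s extglob
get_meminfo() {
    local get=$1 node=$2          # meminfo field name, optional NUMA node
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # With a node argument, read the per-node meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node <n> " prefix; strip it so field names match.
    mem=("${mem[@]#Node +([0-9]) }")
    # The long runs of "continue" in the log are this loop skipping non-matching fields.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

With HugePages_Surp at 0 on node0, the call above prints 0, which is why nodes_test[node] is incremented by 0 before the node0=1024 expecting 1024 check.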
setup/hugepages.sh@90 -- # local sorted_t 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41097532 kB' 'MemAvailable: 42986404 kB' 'Buffers: 2724 kB' 'Cached: 12787188 kB' 'SwapCached: 308 kB' 'Active: 10238240 kB' 'Inactive: 3197504 kB' 'Active(anon): 9793416 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648440 kB' 'Mapped: 220308 kB' 'Shmem: 10495476 kB' 'KReclaimable: 499160 kB' 'Slab: 1161436 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662276 kB' 'KernelStack: 22352 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12704668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218952 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.471 13:37:15 
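A hedged reading of the hugepages.sh@96 test traced above ('always [madvise] never' is this host's THP policy string): the anon-hugepage count is only fetched when transparent hugepages are not fully disabled. The sysfs path and variable names below follow the trace; the surrounding control flow is an assumption:

# Policy string looks like "always [madvise] never"; the bracketed word is the active mode.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # get_meminfo as sketched earlier; 0 kB in this run
else
    anon=0
fi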
00:09:01.471 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the same read/compare/continue loop walks every /proc/meminfo field from MemTotal through HardwareCorrupted looking for AnonHugePages]
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:09:01.472 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 --
mapfile -t mem 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41098248 kB' 'MemAvailable: 42987120 kB' 'Buffers: 2724 kB' 'Cached: 12787192 kB' 'SwapCached: 308 kB' 'Active: 10236976 kB' 'Inactive: 3197504 kB' 'Active(anon): 9792152 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648216 kB' 'Mapped: 220216 kB' 'Shmem: 10495480 kB' 'KReclaimable: 499160 kB' 'Slab: 1161400 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662240 kB' 'KernelStack: 22304 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12704444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218888 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.473 13:37:15 
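The common.sh@29 expansion repeated in this trace is the "Node <n>" prefix strip applied to the freshly mapfile'd meminfo lines. A tiny standalone illustration, using example lines in the per-node format (the two sample entries are taken from this run's snapshot, but the array literal itself is only for demonstration):

shopt -s extglob
mem=("Node 0 MemTotal: 60295196 kB" "Node 0 HugePages_Total: 1024")
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
# MemTotal: 60295196 kB
# HugePages_Total: 1024

When plain /proc/meminfo is being read, as here (local node= is empty), the lines have no such prefix and the expansion is a no-op.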
00:09:01.473 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read/compare/continue loop walks each remaining /proc/meminfo field (SwapCached through HugePages_Rsvd) looking for HugePages_Surp]
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:09:01.474 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
-- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41094484 kB' 'MemAvailable: 42983356 kB' 'Buffers: 2724 kB' 'Cached: 12787196 kB' 'SwapCached: 308 kB' 'Active: 10238044 kB' 'Inactive: 3197504 kB' 'Active(anon): 9793220 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648764 kB' 'Mapped: 220216 kB' 'Shmem: 10495484 kB' 'KReclaimable: 499160 kB' 'Slab: 1161400 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662240 kB' 'KernelStack: 22352 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12723960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218920 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.475 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 
13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:01.476 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:09:01.477 nr_hugepages=1024 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:09:01.477 resv_hugepages=0 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:09:01.477 surplus_hugepages=0 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:09:01.477 anon_hugepages=0 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- 
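At this point the trace has resolved the counters this test cares about: HugePages_Rsvd and HugePages_Surp both came back as 0 and the pool size as 1024, which lines up with the (( 1024 == nr_hugepages + surp + resv )) check logged just above. The field-by-field matching the xtrace spells out amounts to splitting each "Key: value" line of /proc/meminfo and returning the value for the requested key. A minimal stand-alone sketch of that idea follows; it is not the setup/common.sh helper itself, and the function name is made up for illustration:

    # Hypothetical helper (illustration only): print the value of one
    # /proc/meminfo field, splitting lines on ': ' as the trace above does.
    get_meminfo_field() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    # Values matching this run:
    #   get_meminfo_field HugePages_Total   -> 1024
    #   get_meminfo_field HugePages_Rsvd    -> 0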
setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41098236 kB' 'MemAvailable: 42987108 kB' 'Buffers: 2724 kB' 'Cached: 12787228 kB' 'SwapCached: 308 kB' 'Active: 10237404 kB' 'Inactive: 3197504 kB' 'Active(anon): 9792580 kB' 'Inactive(anon): 1347892 kB' 'Active(file): 444824 kB' 'Inactive(file): 1849612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648060 kB' 'Mapped: 220216 kB' 'Shmem: 10495516 kB' 'KReclaimable: 499160 kB' 'Slab: 1161404 kB' 'SReclaimable: 499160 kB' 'SUnreclaim: 662244 kB' 'KernelStack: 22368 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12705740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218904 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4332916 kB' 'DirectMap2M: 57219072 kB' 'DirectMap1G: 7340032 kB' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.477 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:01.478 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
32639140 kB' 'MemFree: 21964996 kB' 'MemUsed: 10674144 kB' 'SwapCached: 296 kB' 'Active: 6266944 kB' 'Inactive: 1143776 kB' 'Active(anon): 5973816 kB' 'Inactive(anon): 957244 kB' 'Active(file): 293128 kB' 'Inactive(file): 186532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6975568 kB' 'Mapped: 145000 kB' 'AnonPages: 438332 kB' 'Shmem: 6495612 kB' 'KernelStack: 12376 kB' 'PageTables: 5948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175812 kB' 'Slab: 489592 kB' 'SReclaimable: 175812 kB' 'SUnreclaim: 313780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
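The source has now switched from the system-wide /proc/meminfo to the per-node view: get_nodes reported two NUMA nodes (no_nodes=2, with all 1024 pages attributed to node 0), and this get_meminfo call reads /sys/devices/system/node/node0/meminfo instead. Those sysfs lines carry a "Node 0 " prefix, which the trace strips with the extglob expansion "${mem[@]#Node +([0-9]) }" before applying the same key/value split. A rough equivalent, with the node number hard-coded for illustration and sed standing in for the extglob trick:

    # Sketch only: read one counter from a node-local meminfo file.
    node=0
    sed "s/^Node ${node} //" "/sys/devices/system/node/node${node}/meminfo" |
    while IFS=': ' read -r var val _; do
        # In this run node0 reports HugePages_Surp: 0
        [[ $var == HugePages_Surp ]] && echo "node${node} HugePages_Surp: $val"
    done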
00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.479 13:37:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.479 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:09:01.480 node0=1024 expecting 1024 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:09:01.480 00:09:01.480 real 0m8.390s 00:09:01.480 user 0m3.036s 00:09:01.480 sys 0m5.482s 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:01.480 13:37:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:09:01.480 ************************************ 00:09:01.480 END TEST no_shrink_alloc 00:09:01.480 ************************************ 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
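The clear_hp loop that starts here walks every hugepage pool directory under every NUMA node and echoes 0 into it, then exports CLEAR_HUGE=yes. The xtrace does not show the redirection target, so the nr_hugepages file below is an assumption (it is the conventional sysfs knob):

# Sketch of the clear_hp step: release every reserved huge page on every node.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # assumed target; not visible in the trace
    done
done
export CLEAR_HUGE=yes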
"${!nodes_sys[@]}" 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:09:01.480 13:37:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:09:01.480 00:09:01.480 real 0m32.572s 00:09:01.480 user 0m11.428s 00:09:01.480 sys 0m20.261s 00:09:01.480 13:37:15 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:01.480 13:37:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:09:01.480 ************************************ 00:09:01.480 END TEST hugepages 00:09:01.480 ************************************ 00:09:01.480 13:37:15 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:09:01.480 13:37:15 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:01.480 13:37:15 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:01.480 13:37:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:09:01.480 ************************************ 00:09:01.480 START TEST driver 00:09:01.480 ************************************ 00:09:01.480 13:37:15 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:09:01.480 * Looking for test storage... 
00:09:01.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:09:01.480 13:37:15 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:09:01.480 13:37:15 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:01.480 13:37:15 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:08.050 13:37:21 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:09:08.050 13:37:21 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:08.050 13:37:21 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:08.050 13:37:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:09:08.050 ************************************ 00:09:08.050 START TEST guess_driver 00:09:08.050 ************************************ 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:09:08.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:09:08.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:09:08.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:09:08.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:09:08.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:09:08.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:09:08.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:09:08.050 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:09:08.051 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:09:08.051 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:09:08.051 13:37:21 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:09:08.051 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:09:08.051 Looking for driver=vfio-pci 00:09:08.051 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:08.051 13:37:21 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:09:08.051 13:37:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:09:08.051 13:37:21 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.341 13:37:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:13.246 13:37:27 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:13.246 13:37:27 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:13.246 13:37:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:13.246 13:37:27 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:09:13.246 13:37:27 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:09:13.246 13:37:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:13.246 13:37:27 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:18.522 00:09:18.522 real 0m11.446s 00:09:18.522 user 0m3.064s 00:09:18.522 sys 0m6.070s 00:09:18.522 13:37:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:18.522 13:37:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:09:18.522 ************************************ 00:09:18.522 END TEST guess_driver 00:09:18.522 ************************************ 00:09:18.782 00:09:18.782 real 0m17.257s 00:09:18.782 user 0m4.714s 00:09:18.782 sys 0m9.387s 00:09:18.782 13:37:33 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:18.782 
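The guess_driver test above settles on vfio-pci: the host exposes /sys/module/vfio/parameters/enable_unsafe_noiommu_mode (read as N), 256 IOMMU groups exist, and "modprobe --show-depends vfio_pci" resolves to real .ko.xz modules rather than a bare alias. A condensed sketch of that decision; the function name is illustrative and the checks are simplified from the traced driver.sh logic:

pick_vfio_driver() {
    local unsafe=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    local groups=(/sys/kernel/iommu_groups/*)
    # vfio-pci is viable when the IOMMU is active (groups exist) or no-IOMMU mode
    # is allowed, and the module actually resolves to kernel objects.
    if { [[ $unsafe == Y ]] || (( ${#groups[@]} > 0 )); } &&
       modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}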
13:37:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:09:18.782 ************************************ 00:09:18.782 END TEST driver 00:09:18.782 ************************************ 00:09:18.782 13:37:33 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:09:18.782 13:37:33 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:18.782 13:37:33 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:18.782 13:37:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:09:18.782 ************************************ 00:09:18.782 START TEST devices 00:09:18.782 ************************************ 00:09:18.782 13:37:33 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:09:18.782 * Looking for test storage... 00:09:18.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:09:18.782 13:37:33 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:09:18.782 13:37:33 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:09:18.782 13:37:33 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:18.782 13:37:33 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:09:24.058 13:37:37 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:09:24.058 13:37:37 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:09:24.058 13:37:37 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:09:24.058 13:37:37 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:09:24.058 13:37:37 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:09:24.058 13:37:37 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:09:24.058 13:37:37 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:24.058 13:37:37 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:09:24.058 13:37:37 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:09:24.058 13:37:37 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:09:24.058 No valid GPT data, 
bailing 00:09:24.058 13:37:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:24.058 13:37:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:09:24.058 13:37:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:09:24.058 13:37:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:24.058 13:37:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:24.058 13:37:37 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:09:24.058 13:37:37 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:09:24.058 13:37:37 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:24.058 13:37:37 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:24.058 13:37:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:09:24.058 ************************************ 00:09:24.058 START TEST nvme_mount 00:09:24.058 ************************************ 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:09:24.059 13:37:37 
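Before anything is mounted, the device scan above filters candidate disks: the namespace must not be zoned, must not already carry a partition table (blkid reports no PTTYPE, so spdk-gpt.py bails with "No valid GPT data"), and must be at least min_disk_size=3221225472 bytes (3 GiB). A reduced sketch of that eligibility check; the helper name is illustrative and the sector math assumes the usual 512-byte sysfs units:

disk_is_usable() {
    local dev=$1                                   # e.g. nvme0n1
    # Skip zoned namespaces.
    [[ $(< /sys/block/$dev/queue/zoned) == none ]] || return 1
    # Skip disks that already have a partition table.
    [[ -z $(blkid -s PTTYPE -o value /dev/$dev) ]] || return 1
    # Require at least 3 GiB; /sys/block/<dev>/size is in 512-byte sectors.
    (( $(< /sys/block/$dev/size) * 512 >= 3221225472 ))
}
# In this run /dev/nvme0n1 reports 1600321314816 bytes, so it becomes test_disk.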
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:09:24.059 13:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:09:24.318 Creating new GPT entries in memory. 00:09:24.318 GPT data structures destroyed! You may now partition the disk using fdisk or 00:09:24.318 other utilities. 00:09:24.318 13:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:09:24.318 13:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:24.318 13:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:09:24.318 13:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:09:24.318 13:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:09:25.697 Creating new GPT entries in memory. 00:09:25.697 The operation has completed successfully. 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1197489 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
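The nvme_mount body above is a plain partition-format-mount sequence: wipe any existing GPT, create one 1 GiB partition (1073741824 / 512 = 2097152 sectors, hence --new=1:2048:2099199), put ext4 on it and mount it under the test directory. A self-contained sketch of the same steps; the mount point is illustrative and partprobe stands in for the sync_dev_uevents.sh wait used in the trace:

disk=/dev/nvme0n1
mnt=/tmp/nvme_mount                       # illustrative, not the path from the log
sgdisk "$disk" --zap-all                  # destroy any existing GPT/MBR
sgdisk "$disk" --new=1:2048:2099199       # 1 GiB: 2048 + 1073741824/512 - 1
partprobe "$disk"                         # stand-in for the traced uevent sync
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                    # the dummy file the later verify step checks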
00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:25.697 13:37:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:29.893 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:09:29.893 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:09:29.894 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:29.894 13:37:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:29.894 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:09:29.894 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:09:29.894 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:29.894 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:09:29.894 
13:37:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:29.894 13:37:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 
== \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:34.087 13:37:48 
setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:34.087 13:37:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:38.279 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:09:38.279 00:09:38.279 real 0m14.545s 00:09:38.279 user 0m4.132s 00:09:38.279 sys 0m8.257s 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:38.279 13:37:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:09:38.279 ************************************ 00:09:38.279 END TEST nvme_mount 00:09:38.279 
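The long runs of "[[ 0000:xx:xx.x == 0000:d8:00.0 ]]" checks in this test are the verify helper reading one status line per PCI device and ignoring every address except the one under test (PCI_ALLOWED=0000:d8:00.0); when that device reports the expected active mount it sets found=1, then confirms the mount point and dummy file really exist. A reduced sketch of the verification, fed with the per-device lines the setup config step prints; names are illustrative:

verify_pci_mount() {
    # stdin: lines of the form "<pci-addr> <vendor> <device> <status text>"
    local want_pci=$1 want_mount=$2 mount_point=$3 test_file=$4
    local pci status found=0
    while read -r pci _ _ status; do
        [[ $pci == "$want_pci" ]] || continue                       # skip other devices
        [[ $status == *"Active devices: "*"$want_mount"* ]] && found=1
    done
    (( found == 1 )) || return 1
    # Device is held by the expected consumer; check the mount is really there.
    mountpoint -q "$mount_point" && [[ -e $test_file ]]
}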
************************************ 00:09:38.279 13:37:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:09:38.279 13:37:52 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:38.279 13:37:52 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:38.279 13:37:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:09:38.279 ************************************ 00:09:38.279 START TEST dm_mount 00:09:38.279 ************************************ 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:09:38.279 13:37:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:09:39.217 Creating new GPT entries in memory. 00:09:39.217 GPT data structures destroyed! You may now partition the disk using fdisk or 00:09:39.217 other utilities. 00:09:39.217 13:37:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:09:39.217 13:37:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:39.217 13:37:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:09:39.217 13:37:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:09:39.217 13:37:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:09:40.155 Creating new GPT entries in memory. 00:09:40.155 The operation has completed successfully. 
00:09:40.155 13:37:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:09:40.155 13:37:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:40.155 13:37:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:09:40.155 13:37:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:09:40.155 13:37:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:09:41.092 The operation has completed successfully. 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1202743 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:41.092 13:37:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- 
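dm_mount repeats the same idea one layer up: two 1 GiB partitions are combined into a device-mapper target named nvme_dm_test, /dev/mapper/nvme_dm_test resolves to /dev/dm-0, both partitions must list dm-0 under their holders/ directory, and the mapped device is then formatted and mounted. The dmsetup table itself is not captured by xtrace, so the linear concatenation below is an assumption, and the mount point is illustrative:

p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
sz1=$(blockdev --getsz "$p1")             # partition sizes in 512-byte sectors
sz2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $sz1 linear $p1 0
$sz1 $sz2 linear $p2 0
EOF
dm=$(readlink -f /dev/mapper/nvme_dm_test)                # /dev/dm-0 in this run
[[ -e /sys/class/block/${p1##*/}/holders/${dm##*/} ]]     # p1 backs dm-0
[[ -e /sys/class/block/${p2##*/}/holders/${dm##*/} ]]     # p2 backs dm-0
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p /tmp/dm_mount && mount /dev/mapper/nvme_dm_test /tmp/dm_mount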
setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:09:45.374 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:09:45.375 
13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:09:45.375 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:09:45.375 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:09:45.375 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:09:45.375 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.375 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:09:45.375 13:37:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:09:45.375 13:37:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:45.375 13:37:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:09:49.570 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:09:49.570 00:09:49.570 real 0m11.496s 00:09:49.570 user 0m2.977s 00:09:49.570 sys 0m5.627s 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:49.570 13:38:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 ************************************ 00:09:49.570 END TEST dm_mount 00:09:49.570 ************************************ 00:09:49.570 13:38:03 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:09:49.570 13:38:03 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:09:49.570 13:38:03 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:09:49.570 13:38:03 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:49.570 
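Note: the dm_mount test that finishes above drives plain Linux tools end to end: carve GPT partitions with sgdisk, join them into a single device-mapper target, put ext4 on it, mount it, then tear everything down and wipe the signatures. A hedged stand-alone sketch of that flow follows; the disk, mount point and dmsetup table are illustrative (the second sgdisk range is verbatim from the trace, the first is inferred from the same sizing arithmetic), not a copy of the test script.

# Hedged sketch of the dm_mount flow traced above.
disk=/dev/nvme0n1            # assumption: a disposable test disk
mnt=/tmp/dm_mount            # assumption: any scratch mount point
dm_name=nvme_dm_test

# Two 1 GiB partitions (2097152 sectors each; the 2:2099200:4196351 range matches the trace).
flock "$disk" sgdisk "$disk" --new=1:2048:2099199
flock "$disk" sgdisk "$disk" --new=2:2099200:4196351

# Concatenate both partitions into one linear device-mapper target.
dmsetup create "$dm_name" <<TABLE
0 2097152 linear ${disk}p1 0
2097152 2097152 linear ${disk}p2 0
TABLE

mkfs.ext4 -qF "/dev/mapper/$dm_name"
mkdir -p "$mnt" && mount "/dev/mapper/$dm_name" "$mnt"

# ... exercise the filesystem ...

# Cleanup, mirroring cleanup_dm/cleanup_nvme in the trace.
umount "$mnt"
dmsetup remove --force "$dm_name"
wipefs --all "${disk}p1"
wipefs --all "${disk}p2"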
13:38:03 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:09:49.570 13:38:03 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:49.570 13:38:03 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:49.830 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:09:49.830 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:09:49.830 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:49.830 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:49.830 13:38:04 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:09:49.830 13:38:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:09:49.830 13:38:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:09:49.830 13:38:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:49.830 13:38:04 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:09:49.830 13:38:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:09:49.830 13:38:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:09:49.830 00:09:49.830 real 0m31.067s 00:09:49.830 user 0m8.760s 00:09:49.830 sys 0m17.116s 00:09:49.830 13:38:04 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:49.830 13:38:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:09:49.830 ************************************ 00:09:49.830 END TEST devices 00:09:49.830 ************************************ 00:09:49.830 00:09:49.830 real 1m50.093s 00:09:49.830 user 0m34.057s 00:09:49.830 sys 1m4.873s 00:09:49.830 13:38:04 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:49.830 13:38:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:09:49.830 ************************************ 00:09:49.830 END TEST setup.sh 00:09:49.830 ************************************ 00:09:49.831 13:38:04 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:09:54.024 Hugepages 00:09:54.024 node hugesize free / total 00:09:54.024 node0 1048576kB 0 / 0 00:09:54.024 node0 2048kB 2048 / 2048 00:09:54.024 node1 1048576kB 0 / 0 00:09:54.024 node1 2048kB 0 / 0 00:09:54.024 00:09:54.025 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:54.025 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:09:54.025 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:09:54.025 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:09:54.025 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:09:54.025 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:09:54.025 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:09:54.025 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:09:54.025 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:09:54.025 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:09:54.025 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:09:54.025 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:09:54.025 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:09:54.025 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:09:54.025 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:09:54.025 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:09:54.025 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:09:54.025 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:09:54.025 13:38:08 -- spdk/autotest.sh@130 -- # uname -s 00:09:54.025 13:38:08 
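Note: the status table above lists, for every I/OAT and NVMe function, which kernel driver currently owns it, and much of the remaining log is setup.sh moving those devices between ioatdma/nvme and vfio-pci. A quick spot-check of one device from sysfs, as a sketch (BDF taken from the table; lspci is optional):

bdf=0000:d8:00.0                                             # the NVMe device from the table above
basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)"   # prints nvme or vfio-pci
lspci -s "$bdf" -k                                           # cross-check: "Kernel driver in use: ..."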
-- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:09:54.025 13:38:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:09:54.025 13:38:08 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:09:58.219 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:58.219 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:00.127 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:10:00.127 13:38:14 -- common/autotest_common.sh@1531 -- # sleep 1 00:10:01.066 13:38:15 -- common/autotest_common.sh@1532 -- # bdfs=() 00:10:01.066 13:38:15 -- common/autotest_common.sh@1532 -- # local bdfs 00:10:01.066 13:38:15 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:10:01.066 13:38:15 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:10:01.066 13:38:15 -- common/autotest_common.sh@1512 -- # bdfs=() 00:10:01.066 13:38:15 -- common/autotest_common.sh@1512 -- # local bdfs 00:10:01.066 13:38:15 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:01.066 13:38:15 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:10:01.066 13:38:15 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:10:01.066 13:38:15 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:10:01.066 13:38:15 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:10:01.066 13:38:15 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:10:05.259 Waiting for block devices as requested 00:10:05.259 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:05.259 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:05.259 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:05.259 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:05.519 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:05.519 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:05.519 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:05.779 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:05.779 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:05.779 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:06.038 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:06.038 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:06.038 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:06.298 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:06.298 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:06.298 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:06.558 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:10:06.558 13:38:20 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 
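Note: the per-device loop that starts above gets its controller list from gen_nvme.sh: the script emits an SPDK bdev configuration as JSON and jq extracts each controller's PCI address (traddr). Reduced to a stand-alone sketch; the checkout path is an assumption, and on this machine the result is the single BDF 0000:d8:00.0.

#!/usr/bin/env bash
rootdir=/path/to/spdk        # assumption: local SPDK checkout
# gen_nvme.sh prints bdev_nvme_attach_controller entries; pull out every traddr.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"   # one PCI address per line
for bdf in "${bdfs[@]}"; do
    echo "would operate on $bdf"   # the trace continues with the real per-device work
done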
00:10:06.558 13:38:20 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:10:06.558 13:38:20 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:10:06.558 13:38:20 -- common/autotest_common.sh@1501 -- # grep 0000:d8:00.0/nvme/nvme 00:10:06.558 13:38:20 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:10:06.558 13:38:20 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:10:06.558 13:38:20 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:10:06.558 13:38:20 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:10:06.558 13:38:20 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:10:06.558 13:38:20 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:10:06.558 13:38:20 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:10:06.558 13:38:20 -- common/autotest_common.sh@1544 -- # grep oacs 00:10:06.558 13:38:20 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:10:06.558 13:38:20 -- common/autotest_common.sh@1544 -- # oacs=' 0xe' 00:10:06.558 13:38:20 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:10:06.558 13:38:20 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:10:06.558 13:38:20 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:10:06.558 13:38:20 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:10:06.558 13:38:20 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:10:06.558 13:38:20 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:10:06.558 13:38:20 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:10:06.558 13:38:20 -- common/autotest_common.sh@1556 -- # continue 00:10:06.558 13:38:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:10:06.558 13:38:20 -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:06.558 13:38:20 -- common/autotest_common.sh@10 -- # set +x 00:10:06.817 13:38:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:10:06.817 13:38:21 -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:06.817 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:10:06.817 13:38:21 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:10:11.009 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:10:11.009 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:12.389 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:10:12.389 13:38:26 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:10:12.389 13:38:26 -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:12.389 13:38:26 -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.389 13:38:26 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:10:12.389 13:38:26 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:10:12.389 13:38:26 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:10:12.389 13:38:26 -- common/autotest_common.sh@1576 -- # bdfs=() 00:10:12.389 13:38:26 -- common/autotest_common.sh@1576 -- # local bdfs 00:10:12.389 13:38:26 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:10:12.389 13:38:26 -- common/autotest_common.sh@1512 -- # bdfs=() 00:10:12.389 13:38:26 -- common/autotest_common.sh@1512 -- # local bdfs 00:10:12.389 13:38:26 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:12.647 13:38:26 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:10:12.647 13:38:26 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:10:12.647 13:38:26 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:10:12.647 13:38:26 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:10:12.647 13:38:26 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:10:12.647 13:38:26 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:10:12.647 13:38:26 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:10:12.647 13:38:26 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:10:12.647 13:38:26 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:10:12.647 13:38:26 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:d8:00.0 00:10:12.647 13:38:26 -- common/autotest_common.sh@1591 -- # [[ -z 0000:d8:00.0 ]] 00:10:12.647 13:38:26 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=1214465 00:10:12.647 13:38:26 -- common/autotest_common.sh@1597 -- # waitforlisten 1214465 00:10:12.647 13:38:26 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:12.647 13:38:26 -- common/autotest_common.sh@830 -- # '[' -z 1214465 ']' 00:10:12.647 13:38:26 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.647 13:38:26 -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:12.647 13:38:26 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.647 13:38:26 -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:12.647 13:38:26 -- common/autotest_common.sh@10 -- # set +x 00:10:12.647 [2024-06-10 13:38:27.039654] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
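Note: a few lines earlier the pre-cleanup pass decides whether the controller is worth reverting by parsing nvme id-ctrl output: OACS is 0xe on this drive, bit 3 (namespace management) is set, and unvmcap comes back 0. That check, pulled out into a hedged sketch (assumes nvme-cli and a controller node like the /dev/nvme0 resolved above):

ctrlr=/dev/nvme0                              # controller node, as resolved in the trace
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
oacs_ns_manage=$((oacs & 0x8))                # bit 3 of OACS = namespace management support
if [[ $oacs_ns_manage -ne 0 ]]; then
    # unvmcap == 0 means no unallocated capacity is left on the controller.
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    echo "namespace management supported, unvmcap:${unvmcap}"
else
    echo "controller does not support namespace management"
fi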
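Note: the spdk_tgt that is starting here is then driven over JSON-RPC, as the trace lines that follow show: the controller is attached as bdev nvme0, an Opal revert is attempted, and the target is killed. Condensed into a sketch against the already-running target (rpc.py paths relative to the SPDK checkout root):

# Against the spdk_tgt that has just started (default socket /var/tmp/spdk.sock):
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
# On this drive the second call returns -32602 "Invalid parameters" because the
# controller reports no Opal support, exactly as the JSON-RPC error below shows.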
00:10:12.647 [2024-06-10 13:38:27.039720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214465 ] 00:10:12.647 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.906 [2024-06-10 13:38:27.157967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.906 [2024-06-10 13:38:27.243146] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.472 13:38:27 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:13.472 13:38:27 -- common/autotest_common.sh@863 -- # return 0 00:10:13.472 13:38:27 -- common/autotest_common.sh@1599 -- # bdf_id=0 00:10:13.472 13:38:27 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:10:13.472 13:38:27 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:10:16.758 nvme0n1 00:10:16.758 13:38:31 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:10:17.017 [2024-06-10 13:38:31.231635] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:10:17.017 request: 00:10:17.017 { 00:10:17.017 "nvme_ctrlr_name": "nvme0", 00:10:17.017 "password": "test", 00:10:17.017 "method": "bdev_nvme_opal_revert", 00:10:17.017 "req_id": 1 00:10:17.017 } 00:10:17.017 Got JSON-RPC error response 00:10:17.017 response: 00:10:17.017 { 00:10:17.017 "code": -32602, 00:10:17.017 "message": "Invalid parameters" 00:10:17.017 } 00:10:17.017 13:38:31 -- common/autotest_common.sh@1603 -- # true 00:10:17.017 13:38:31 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:10:17.017 13:38:31 -- common/autotest_common.sh@1607 -- # killprocess 1214465 00:10:17.017 13:38:31 -- common/autotest_common.sh@949 -- # '[' -z 1214465 ']' 00:10:17.017 13:38:31 -- common/autotest_common.sh@953 -- # kill -0 1214465 00:10:17.017 13:38:31 -- common/autotest_common.sh@954 -- # uname 00:10:17.017 13:38:31 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:17.017 13:38:31 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1214465 00:10:17.017 13:38:31 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:17.017 13:38:31 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:17.017 13:38:31 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1214465' 00:10:17.017 killing process with pid 1214465 00:10:17.017 13:38:31 -- common/autotest_common.sh@968 -- # kill 1214465 00:10:17.017 13:38:31 -- common/autotest_common.sh@973 -- # wait 1214465 00:10:19.560 13:38:33 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:10:19.560 13:38:33 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:10:19.560 13:38:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:10:19.560 13:38:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:10:19.560 13:38:33 -- spdk/autotest.sh@162 -- # timing_enter lib 00:10:19.560 13:38:33 -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:19.560 13:38:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.560 13:38:33 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:10:19.560 13:38:33 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:10:19.560 13:38:33 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:19.560 13:38:33 
-- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:19.560 13:38:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.560 ************************************ 00:10:19.560 START TEST env 00:10:19.560 ************************************ 00:10:19.560 13:38:33 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:10:19.560 * Looking for test storage... 00:10:19.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:10:19.560 13:38:33 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:10:19.560 13:38:33 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:19.560 13:38:33 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:19.560 13:38:33 env -- common/autotest_common.sh@10 -- # set +x 00:10:19.560 ************************************ 00:10:19.560 START TEST env_memory 00:10:19.560 ************************************ 00:10:19.560 13:38:33 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:10:19.560 00:10:19.560 00:10:19.560 CUnit - A unit testing framework for C - Version 2.1-3 00:10:19.560 http://cunit.sourceforge.net/ 00:10:19.560 00:10:19.560 00:10:19.560 Suite: memory 00:10:19.560 Test: alloc and free memory map ...[2024-06-10 13:38:33.746351] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:19.560 passed 00:10:19.560 Test: mem map translation ...[2024-06-10 13:38:33.773142] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:19.560 [2024-06-10 13:38:33.773164] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:19.560 [2024-06-10 13:38:33.773216] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:19.560 [2024-06-10 13:38:33.773228] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:19.560 passed 00:10:19.560 Test: mem map registration ...[2024-06-10 13:38:33.826273] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:10:19.560 [2024-06-10 13:38:33.826294] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:10:19.560 passed 00:10:19.560 Test: mem map adjacent registrations ...passed 00:10:19.560 00:10:19.560 Run Summary: Type Total Ran Passed Failed Inactive 00:10:19.560 suites 1 1 n/a 0 0 00:10:19.560 tests 4 4 4 0 0 00:10:19.560 asserts 152 152 152 0 n/a 00:10:19.560 00:10:19.560 Elapsed time = 0.185 seconds 00:10:19.560 00:10:19.560 real 0m0.200s 00:10:19.560 user 0m0.182s 00:10:19.560 sys 0m0.017s 00:10:19.560 13:38:33 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:19.560 13:38:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:19.560 
************************************ 00:10:19.560 END TEST env_memory 00:10:19.560 ************************************ 00:10:19.560 13:38:33 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:10:19.560 13:38:33 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:19.560 13:38:33 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:19.560 13:38:33 env -- common/autotest_common.sh@10 -- # set +x 00:10:19.560 ************************************ 00:10:19.560 START TEST env_vtophys 00:10:19.560 ************************************ 00:10:19.560 13:38:33 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:10:19.560 EAL: lib.eal log level changed from notice to debug 00:10:19.560 EAL: Detected lcore 0 as core 0 on socket 0 00:10:19.560 EAL: Detected lcore 1 as core 1 on socket 0 00:10:19.560 EAL: Detected lcore 2 as core 2 on socket 0 00:10:19.560 EAL: Detected lcore 3 as core 3 on socket 0 00:10:19.560 EAL: Detected lcore 4 as core 4 on socket 0 00:10:19.560 EAL: Detected lcore 5 as core 5 on socket 0 00:10:19.560 EAL: Detected lcore 6 as core 6 on socket 0 00:10:19.560 EAL: Detected lcore 7 as core 8 on socket 0 00:10:19.560 EAL: Detected lcore 8 as core 9 on socket 0 00:10:19.560 EAL: Detected lcore 9 as core 10 on socket 0 00:10:19.560 EAL: Detected lcore 10 as core 11 on socket 0 00:10:19.560 EAL: Detected lcore 11 as core 12 on socket 0 00:10:19.560 EAL: Detected lcore 12 as core 13 on socket 0 00:10:19.560 EAL: Detected lcore 13 as core 14 on socket 0 00:10:19.560 EAL: Detected lcore 14 as core 16 on socket 0 00:10:19.560 EAL: Detected lcore 15 as core 17 on socket 0 00:10:19.560 EAL: Detected lcore 16 as core 18 on socket 0 00:10:19.560 EAL: Detected lcore 17 as core 19 on socket 0 00:10:19.560 EAL: Detected lcore 18 as core 20 on socket 0 00:10:19.560 EAL: Detected lcore 19 as core 21 on socket 0 00:10:19.560 EAL: Detected lcore 20 as core 22 on socket 0 00:10:19.560 EAL: Detected lcore 21 as core 24 on socket 0 00:10:19.560 EAL: Detected lcore 22 as core 25 on socket 0 00:10:19.560 EAL: Detected lcore 23 as core 26 on socket 0 00:10:19.560 EAL: Detected lcore 24 as core 27 on socket 0 00:10:19.560 EAL: Detected lcore 25 as core 28 on socket 0 00:10:19.560 EAL: Detected lcore 26 as core 29 on socket 0 00:10:19.560 EAL: Detected lcore 27 as core 30 on socket 0 00:10:19.560 EAL: Detected lcore 28 as core 0 on socket 1 00:10:19.560 EAL: Detected lcore 29 as core 1 on socket 1 00:10:19.560 EAL: Detected lcore 30 as core 2 on socket 1 00:10:19.560 EAL: Detected lcore 31 as core 3 on socket 1 00:10:19.560 EAL: Detected lcore 32 as core 4 on socket 1 00:10:19.560 EAL: Detected lcore 33 as core 5 on socket 1 00:10:19.560 EAL: Detected lcore 34 as core 6 on socket 1 00:10:19.560 EAL: Detected lcore 35 as core 8 on socket 1 00:10:19.560 EAL: Detected lcore 36 as core 9 on socket 1 00:10:19.560 EAL: Detected lcore 37 as core 10 on socket 1 00:10:19.560 EAL: Detected lcore 38 as core 11 on socket 1 00:10:19.560 EAL: Detected lcore 39 as core 12 on socket 1 00:10:19.560 EAL: Detected lcore 40 as core 13 on socket 1 00:10:19.560 EAL: Detected lcore 41 as core 14 on socket 1 00:10:19.560 EAL: Detected lcore 42 as core 16 on socket 1 00:10:19.560 EAL: Detected lcore 43 as core 17 on socket 1 00:10:19.560 EAL: Detected lcore 44 as core 18 on socket 1 00:10:19.560 EAL: Detected lcore 45 as core 19 on socket 1 00:10:19.560 EAL: 
Detected lcore 46 as core 20 on socket 1 00:10:19.560 EAL: Detected lcore 47 as core 21 on socket 1 00:10:19.560 EAL: Detected lcore 48 as core 22 on socket 1 00:10:19.560 EAL: Detected lcore 49 as core 24 on socket 1 00:10:19.560 EAL: Detected lcore 50 as core 25 on socket 1 00:10:19.561 EAL: Detected lcore 51 as core 26 on socket 1 00:10:19.561 EAL: Detected lcore 52 as core 27 on socket 1 00:10:19.561 EAL: Detected lcore 53 as core 28 on socket 1 00:10:19.561 EAL: Detected lcore 54 as core 29 on socket 1 00:10:19.561 EAL: Detected lcore 55 as core 30 on socket 1 00:10:19.561 EAL: Detected lcore 56 as core 0 on socket 0 00:10:19.561 EAL: Detected lcore 57 as core 1 on socket 0 00:10:19.561 EAL: Detected lcore 58 as core 2 on socket 0 00:10:19.561 EAL: Detected lcore 59 as core 3 on socket 0 00:10:19.561 EAL: Detected lcore 60 as core 4 on socket 0 00:10:19.561 EAL: Detected lcore 61 as core 5 on socket 0 00:10:19.561 EAL: Detected lcore 62 as core 6 on socket 0 00:10:19.561 EAL: Detected lcore 63 as core 8 on socket 0 00:10:19.561 EAL: Detected lcore 64 as core 9 on socket 0 00:10:19.561 EAL: Detected lcore 65 as core 10 on socket 0 00:10:19.561 EAL: Detected lcore 66 as core 11 on socket 0 00:10:19.561 EAL: Detected lcore 67 as core 12 on socket 0 00:10:19.561 EAL: Detected lcore 68 as core 13 on socket 0 00:10:19.561 EAL: Detected lcore 69 as core 14 on socket 0 00:10:19.561 EAL: Detected lcore 70 as core 16 on socket 0 00:10:19.561 EAL: Detected lcore 71 as core 17 on socket 0 00:10:19.561 EAL: Detected lcore 72 as core 18 on socket 0 00:10:19.561 EAL: Detected lcore 73 as core 19 on socket 0 00:10:19.561 EAL: Detected lcore 74 as core 20 on socket 0 00:10:19.561 EAL: Detected lcore 75 as core 21 on socket 0 00:10:19.561 EAL: Detected lcore 76 as core 22 on socket 0 00:10:19.561 EAL: Detected lcore 77 as core 24 on socket 0 00:10:19.561 EAL: Detected lcore 78 as core 25 on socket 0 00:10:19.561 EAL: Detected lcore 79 as core 26 on socket 0 00:10:19.561 EAL: Detected lcore 80 as core 27 on socket 0 00:10:19.561 EAL: Detected lcore 81 as core 28 on socket 0 00:10:19.561 EAL: Detected lcore 82 as core 29 on socket 0 00:10:19.561 EAL: Detected lcore 83 as core 30 on socket 0 00:10:19.561 EAL: Detected lcore 84 as core 0 on socket 1 00:10:19.561 EAL: Detected lcore 85 as core 1 on socket 1 00:10:19.561 EAL: Detected lcore 86 as core 2 on socket 1 00:10:19.561 EAL: Detected lcore 87 as core 3 on socket 1 00:10:19.561 EAL: Detected lcore 88 as core 4 on socket 1 00:10:19.561 EAL: Detected lcore 89 as core 5 on socket 1 00:10:19.561 EAL: Detected lcore 90 as core 6 on socket 1 00:10:19.561 EAL: Detected lcore 91 as core 8 on socket 1 00:10:19.561 EAL: Detected lcore 92 as core 9 on socket 1 00:10:19.561 EAL: Detected lcore 93 as core 10 on socket 1 00:10:19.561 EAL: Detected lcore 94 as core 11 on socket 1 00:10:19.561 EAL: Detected lcore 95 as core 12 on socket 1 00:10:19.561 EAL: Detected lcore 96 as core 13 on socket 1 00:10:19.561 EAL: Detected lcore 97 as core 14 on socket 1 00:10:19.561 EAL: Detected lcore 98 as core 16 on socket 1 00:10:19.561 EAL: Detected lcore 99 as core 17 on socket 1 00:10:19.561 EAL: Detected lcore 100 as core 18 on socket 1 00:10:19.561 EAL: Detected lcore 101 as core 19 on socket 1 00:10:19.561 EAL: Detected lcore 102 as core 20 on socket 1 00:10:19.561 EAL: Detected lcore 103 as core 21 on socket 1 00:10:19.561 EAL: Detected lcore 104 as core 22 on socket 1 00:10:19.561 EAL: Detected lcore 105 as core 24 on socket 1 00:10:19.561 EAL: Detected lcore 106 as core 
25 on socket 1 00:10:19.561 EAL: Detected lcore 107 as core 26 on socket 1 00:10:19.561 EAL: Detected lcore 108 as core 27 on socket 1 00:10:19.561 EAL: Detected lcore 109 as core 28 on socket 1 00:10:19.561 EAL: Detected lcore 110 as core 29 on socket 1 00:10:19.561 EAL: Detected lcore 111 as core 30 on socket 1 00:10:19.561 EAL: Maximum logical cores by configuration: 128 00:10:19.561 EAL: Detected CPU lcores: 112 00:10:19.561 EAL: Detected NUMA nodes: 2 00:10:19.561 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:10:19.561 EAL: Detected shared linkage of DPDK 00:10:19.561 EAL: No shared files mode enabled, IPC will be disabled 00:10:19.878 EAL: Bus pci wants IOVA as 'DC' 00:10:19.878 EAL: Buses did not request a specific IOVA mode. 00:10:19.878 EAL: IOMMU is available, selecting IOVA as VA mode. 00:10:19.878 EAL: Selected IOVA mode 'VA' 00:10:19.878 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.878 EAL: Probing VFIO support... 00:10:19.878 EAL: IOMMU type 1 (Type 1) is supported 00:10:19.878 EAL: IOMMU type 7 (sPAPR) is not supported 00:10:19.878 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:10:19.878 EAL: VFIO support initialized 00:10:19.878 EAL: Ask a virtual area of 0x2e000 bytes 00:10:19.878 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:19.878 EAL: Setting up physically contiguous memory... 00:10:19.878 EAL: Setting maximum number of open files to 524288 00:10:19.878 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:19.878 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:10:19.878 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:19.878 EAL: Ask a virtual area of 0x61000 bytes 00:10:19.878 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:19.878 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:19.879 EAL: Ask a virtual area of 0x400000000 bytes 00:10:19.879 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:19.879 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:19.879 EAL: Ask a virtual area of 0x61000 bytes 00:10:19.879 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:19.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:19.879 EAL: Ask a virtual area of 0x400000000 bytes 00:10:19.879 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:19.879 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:19.879 EAL: Ask a virtual area of 0x61000 bytes 00:10:19.879 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:19.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:19.879 EAL: Ask a virtual area of 0x400000000 bytes 00:10:19.879 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:19.879 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:19.879 EAL: Ask a virtual area of 0x61000 bytes 00:10:19.879 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:19.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:19.879 EAL: Ask a virtual area of 0x400000000 bytes 00:10:19.879 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:19.879 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:19.879 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:10:19.879 EAL: Ask a virtual area of 0x61000 bytes 00:10:19.879 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:10:19.879 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:10:19.879 EAL: Ask a virtual area of 0x400000000 bytes 00:10:19.879 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:10:19.879 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:10:19.879 EAL: Ask a virtual area of 0x61000 bytes 00:10:19.879 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:10:19.879 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:19.879 EAL: Ask a virtual area of 0x400000000 bytes 00:10:19.879 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:10:19.879 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:10:19.879 EAL: Ask a virtual area of 0x61000 bytes 00:10:19.879 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:10:19.879 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:19.879 EAL: Ask a virtual area of 0x400000000 bytes 00:10:19.879 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:10:19.879 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:10:19.879 EAL: Ask a virtual area of 0x61000 bytes 00:10:19.879 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:10:19.879 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:19.879 EAL: Ask a virtual area of 0x400000000 bytes 00:10:19.879 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:10:19.879 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:10:19.879 EAL: Hugepages will be freed exactly as allocated. 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: TSC frequency is ~2500000 KHz 00:10:19.879 EAL: Main lcore 0 is ready (tid=7f9987d6fa00;cpuset=[0]) 00:10:19.879 EAL: Trying to obtain current memory policy. 00:10:19.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:19.879 EAL: Restoring previous memory policy: 0 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was expanded by 2MB 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: No PCI address specified using 'addr=' in: bus=pci 00:10:19.879 EAL: Mem event callback 'spdk:(nil)' registered 00:10:19.879 00:10:19.879 00:10:19.879 CUnit - A unit testing framework for C - Version 2.1-3 00:10:19.879 http://cunit.sourceforge.net/ 00:10:19.879 00:10:19.879 00:10:19.879 Suite: components_suite 00:10:19.879 Test: vtophys_malloc_test ...passed 00:10:19.879 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:19.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:19.879 EAL: Restoring previous memory policy: 4 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was expanded by 4MB 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was shrunk by 4MB 00:10:19.879 EAL: Trying to obtain current memory policy. 
00:10:19.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:19.879 EAL: Restoring previous memory policy: 4 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was expanded by 6MB 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was shrunk by 6MB 00:10:19.879 EAL: Trying to obtain current memory policy. 00:10:19.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:19.879 EAL: Restoring previous memory policy: 4 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was expanded by 10MB 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was shrunk by 10MB 00:10:19.879 EAL: Trying to obtain current memory policy. 00:10:19.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:19.879 EAL: Restoring previous memory policy: 4 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was expanded by 18MB 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was shrunk by 18MB 00:10:19.879 EAL: Trying to obtain current memory policy. 00:10:19.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:19.879 EAL: Restoring previous memory policy: 4 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was expanded by 34MB 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was shrunk by 34MB 00:10:19.879 EAL: Trying to obtain current memory policy. 00:10:19.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:19.879 EAL: Restoring previous memory policy: 4 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was expanded by 66MB 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was shrunk by 66MB 00:10:19.879 EAL: Trying to obtain current memory policy. 
00:10:19.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:19.879 EAL: Restoring previous memory policy: 4 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was expanded by 130MB 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was shrunk by 130MB 00:10:19.879 EAL: Trying to obtain current memory policy. 00:10:19.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:19.879 EAL: Restoring previous memory policy: 4 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.879 EAL: request: mp_malloc_sync 00:10:19.879 EAL: No shared files mode enabled, IPC is disabled 00:10:19.879 EAL: Heap on socket 0 was expanded by 258MB 00:10:19.879 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.157 EAL: request: mp_malloc_sync 00:10:20.157 EAL: No shared files mode enabled, IPC is disabled 00:10:20.157 EAL: Heap on socket 0 was shrunk by 258MB 00:10:20.157 EAL: Trying to obtain current memory policy. 00:10:20.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:20.157 EAL: Restoring previous memory policy: 4 00:10:20.157 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.157 EAL: request: mp_malloc_sync 00:10:20.157 EAL: No shared files mode enabled, IPC is disabled 00:10:20.157 EAL: Heap on socket 0 was expanded by 514MB 00:10:20.157 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.416 EAL: request: mp_malloc_sync 00:10:20.416 EAL: No shared files mode enabled, IPC is disabled 00:10:20.416 EAL: Heap on socket 0 was shrunk by 514MB 00:10:20.416 EAL: Trying to obtain current memory policy. 
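Note: every "Heap on socket 0 was expanded/shrunk by N MB" pair above is DPDK pulling 2 MB hugepages from node 0 into its heap as the vtophys test allocates progressively larger buffers, then handing them back on free; node 1 has no hugepages configured, which is why EAL keeps noting "No free 2048 kB hugepages reported on node 1". The per-node pools can be checked directly from sysfs, as a sketch:

# Per-NUMA-node 2 MB hugepage pools (node0: 2048 pages, node1: 0 in the status table earlier).
for node in /sys/devices/system/node/node*; do
    hp=$node/hugepages/hugepages-2048kB
    echo "$(basename "$node"): $(cat "$hp/free_hugepages") free of $(cat "$hp/nr_hugepages")"
done
grep -i huge /proc/meminfo    # system-wide totals for comparison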
00:10:20.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:20.416 EAL: Restoring previous memory policy: 4 00:10:20.416 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.416 EAL: request: mp_malloc_sync 00:10:20.416 EAL: No shared files mode enabled, IPC is disabled 00:10:20.416 EAL: Heap on socket 0 was expanded by 1026MB 00:10:20.675 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.934 EAL: request: mp_malloc_sync 00:10:20.934 EAL: No shared files mode enabled, IPC is disabled 00:10:20.934 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:20.934 passed 00:10:20.934 00:10:20.934 Run Summary: Type Total Ran Passed Failed Inactive 00:10:20.934 suites 1 1 n/a 0 0 00:10:20.934 tests 2 2 2 0 0 00:10:20.934 asserts 497 497 497 0 n/a 00:10:20.934 00:10:20.934 Elapsed time = 1.016 seconds 00:10:20.934 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.934 EAL: request: mp_malloc_sync 00:10:20.934 EAL: No shared files mode enabled, IPC is disabled 00:10:20.934 EAL: Heap on socket 0 was shrunk by 2MB 00:10:20.934 EAL: No shared files mode enabled, IPC is disabled 00:10:20.934 EAL: No shared files mode enabled, IPC is disabled 00:10:20.934 EAL: No shared files mode enabled, IPC is disabled 00:10:20.934 00:10:20.934 real 0m1.200s 00:10:20.934 user 0m0.667s 00:10:20.934 sys 0m0.501s 00:10:20.934 13:38:35 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:20.934 13:38:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:20.934 ************************************ 00:10:20.934 END TEST env_vtophys 00:10:20.934 ************************************ 00:10:20.934 13:38:35 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:10:20.934 13:38:35 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:20.934 13:38:35 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:20.934 13:38:35 env -- common/autotest_common.sh@10 -- # set +x 00:10:20.934 ************************************ 00:10:20.934 START TEST env_pci 00:10:20.934 ************************************ 00:10:20.934 13:38:35 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:10:20.934 00:10:20.934 00:10:20.934 CUnit - A unit testing framework for C - Version 2.1-3 00:10:20.934 http://cunit.sourceforge.net/ 00:10:20.934 00:10:20.934 00:10:20.934 Suite: pci 00:10:20.934 Test: pci_hook ...[2024-06-10 13:38:35.282189] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1216003 has claimed it 00:10:20.934 EAL: Cannot find device (10000:00:01.0) 00:10:20.934 EAL: Failed to attach device on primary process 00:10:20.934 passed 00:10:20.934 00:10:20.934 Run Summary: Type Total Ran Passed Failed Inactive 00:10:20.934 suites 1 1 n/a 0 0 00:10:20.934 tests 1 1 1 0 0 00:10:20.934 asserts 25 25 25 0 n/a 00:10:20.934 00:10:20.934 Elapsed time = 0.048 seconds 00:10:20.934 00:10:20.934 real 0m0.071s 00:10:20.934 user 0m0.023s 00:10:20.934 sys 0m0.048s 00:10:20.934 13:38:35 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:20.934 13:38:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:20.934 ************************************ 00:10:20.934 END TEST env_pci 00:10:20.934 ************************************ 00:10:20.934 13:38:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:20.934 
13:38:35 env -- env/env.sh@15 -- # uname 00:10:20.934 13:38:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:20.934 13:38:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:20.934 13:38:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:20.934 13:38:35 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:10:20.934 13:38:35 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:20.934 13:38:35 env -- common/autotest_common.sh@10 -- # set +x 00:10:21.193 ************************************ 00:10:21.193 START TEST env_dpdk_post_init 00:10:21.193 ************************************ 00:10:21.193 13:38:35 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:21.193 EAL: Detected CPU lcores: 112 00:10:21.193 EAL: Detected NUMA nodes: 2 00:10:21.193 EAL: Detected shared linkage of DPDK 00:10:21.193 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:21.193 EAL: Selected IOVA mode 'VA' 00:10:21.193 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.193 EAL: VFIO support initialized 00:10:21.193 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:21.193 EAL: Using IOMMU type 1 (Type 1) 00:10:21.193 EAL: Ignore mapping IO port bar(1) 00:10:21.193 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:10:21.193 EAL: Ignore mapping IO port bar(1) 00:10:21.193 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:10:21.193 EAL: Ignore mapping IO port bar(1) 00:10:21.193 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:10:21.452 EAL: Ignore mapping IO port bar(1) 00:10:21.452 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:10:21.452 EAL: Ignore mapping IO port bar(1) 00:10:21.452 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:10:21.452 EAL: Ignore mapping IO port bar(1) 00:10:21.452 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:10:21.452 EAL: Ignore mapping IO port bar(1) 00:10:21.452 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:10:21.452 EAL: Ignore mapping IO port bar(1) 00:10:21.452 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:10:21.452 EAL: Ignore mapping IO port bar(1) 00:10:21.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:10:21.453 EAL: Ignore mapping IO port bar(1) 00:10:21.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:10:21.453 EAL: Ignore mapping IO port bar(1) 00:10:21.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:10:21.453 EAL: Ignore mapping IO port bar(1) 00:10:21.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:10:21.453 EAL: Ignore mapping IO port bar(1) 00:10:21.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:10:21.453 EAL: Ignore mapping IO port bar(1) 00:10:21.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:10:21.453 EAL: Ignore mapping IO port bar(1) 00:10:21.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:10:21.453 EAL: 
Ignore mapping IO port bar(1) 00:10:21.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:10:22.390 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:10:25.681 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:10:25.681 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:10:26.250 Starting DPDK initialization... 00:10:26.250 Starting SPDK post initialization... 00:10:26.250 SPDK NVMe probe 00:10:26.250 Attaching to 0000:d8:00.0 00:10:26.250 Attached to 0000:d8:00.0 00:10:26.250 Cleaning up... 00:10:26.250 00:10:26.250 real 0m5.050s 00:10:26.250 user 0m3.660s 00:10:26.250 sys 0m0.437s 00:10:26.250 13:38:40 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:26.250 13:38:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:26.250 ************************************ 00:10:26.250 END TEST env_dpdk_post_init 00:10:26.250 ************************************ 00:10:26.250 13:38:40 env -- env/env.sh@26 -- # uname 00:10:26.250 13:38:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:26.250 13:38:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:10:26.250 13:38:40 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:26.250 13:38:40 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:26.250 13:38:40 env -- common/autotest_common.sh@10 -- # set +x 00:10:26.250 ************************************ 00:10:26.250 START TEST env_mem_callbacks 00:10:26.250 ************************************ 00:10:26.250 13:38:40 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:10:26.250 EAL: Detected CPU lcores: 112 00:10:26.250 EAL: Detected NUMA nodes: 2 00:10:26.250 EAL: Detected shared linkage of DPDK 00:10:26.250 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:26.250 EAL: Selected IOVA mode 'VA' 00:10:26.250 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.250 EAL: VFIO support initialized 00:10:26.250 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:26.250 00:10:26.250 00:10:26.250 CUnit - A unit testing framework for C - Version 2.1-3 00:10:26.250 http://cunit.sourceforge.net/ 00:10:26.250 00:10:26.250 00:10:26.250 Suite: memory 00:10:26.250 Test: test ... 
00:10:26.250 register 0x200000200000 2097152 00:10:26.250 malloc 3145728 00:10:26.250 register 0x200000400000 4194304 00:10:26.250 buf 0x200000500000 len 3145728 PASSED 00:10:26.250 malloc 64 00:10:26.250 buf 0x2000004fff40 len 64 PASSED 00:10:26.250 malloc 4194304 00:10:26.250 register 0x200000800000 6291456 00:10:26.250 buf 0x200000a00000 len 4194304 PASSED 00:10:26.250 free 0x200000500000 3145728 00:10:26.250 free 0x2000004fff40 64 00:10:26.250 unregister 0x200000400000 4194304 PASSED 00:10:26.250 free 0x200000a00000 4194304 00:10:26.250 unregister 0x200000800000 6291456 PASSED 00:10:26.250 malloc 8388608 00:10:26.250 register 0x200000400000 10485760 00:10:26.250 buf 0x200000600000 len 8388608 PASSED 00:10:26.250 free 0x200000600000 8388608 00:10:26.250 unregister 0x200000400000 10485760 PASSED 00:10:26.250 passed 00:10:26.250 00:10:26.250 Run Summary: Type Total Ran Passed Failed Inactive 00:10:26.250 suites 1 1 n/a 0 0 00:10:26.250 tests 1 1 1 0 0 00:10:26.250 asserts 15 15 15 0 n/a 00:10:26.250 00:10:26.250 Elapsed time = 0.008 seconds 00:10:26.250 00:10:26.250 real 0m0.088s 00:10:26.250 user 0m0.027s 00:10:26.250 sys 0m0.061s 00:10:26.250 13:38:40 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:26.250 13:38:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:26.250 ************************************ 00:10:26.250 END TEST env_mem_callbacks 00:10:26.250 ************************************ 00:10:26.250 00:10:26.250 real 0m7.152s 00:10:26.250 user 0m4.747s 00:10:26.250 sys 0m1.462s 00:10:26.250 13:38:40 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:26.250 13:38:40 env -- common/autotest_common.sh@10 -- # set +x 00:10:26.250 ************************************ 00:10:26.250 END TEST env 00:10:26.250 ************************************ 00:10:26.510 13:38:40 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:10:26.510 13:38:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:26.510 13:38:40 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:26.510 13:38:40 -- common/autotest_common.sh@10 -- # set +x 00:10:26.510 ************************************ 00:10:26.510 START TEST rpc 00:10:26.510 ************************************ 00:10:26.510 13:38:40 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:10:26.510 * Looking for test storage... 00:10:26.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:10:26.510 13:38:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1217146 00:10:26.510 13:38:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:26.510 13:38:40 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:10:26.510 13:38:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1217146 00:10:26.510 13:38:40 rpc -- common/autotest_common.sh@830 -- # '[' -z 1217146 ']' 00:10:26.510 13:38:40 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.510 13:38:40 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:26.510 13:38:40 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:26.510 13:38:40 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:26.510 13:38:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.510 [2024-06-10 13:38:40.958764] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:10:26.510 [2024-06-10 13:38:40.958817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217146 ] 00:10:26.769 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.769 [2024-06-10 13:38:41.065507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.769 [2024-06-10 13:38:41.148109] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:26.769 [2024-06-10 13:38:41.148156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1217146' to capture a snapshot of events at runtime. 00:10:26.769 [2024-06-10 13:38:41.148170] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.769 [2024-06-10 13:38:41.148182] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.769 [2024-06-10 13:38:41.148191] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1217146 for offline analysis/debug. 00:10:26.769 [2024-06-10 13:38:41.148220] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.707 13:38:41 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:27.707 13:38:41 rpc -- common/autotest_common.sh@863 -- # return 0 00:10:27.707 13:38:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:10:27.707 13:38:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:10:27.707 13:38:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:27.707 13:38:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:27.707 13:38:41 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:27.707 13:38:41 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:27.707 13:38:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.707 ************************************ 00:10:27.707 START TEST rpc_integrity 00:10:27.707 ************************************ 00:10:27.707 13:38:41 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:10:27.707 13:38:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:27.707 13:38:41 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.707 13:38:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:27.707 13:38:41 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.707 13:38:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:27.707 13:38:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:27.707 13:38:41 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:27.707 13:38:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:27.707 13:38:41 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.707 13:38:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:27.707 13:38:41 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.707 13:38:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:27.707 13:38:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:27.707 13:38:41 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.707 13:38:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:27.707 13:38:41 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.707 13:38:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:27.707 { 00:10:27.707 "name": "Malloc0", 00:10:27.707 "aliases": [ 00:10:27.707 "85ac8327-6ff4-4625-8900-d608ddb1765a" 00:10:27.707 ], 00:10:27.707 "product_name": "Malloc disk", 00:10:27.707 "block_size": 512, 00:10:27.707 "num_blocks": 16384, 00:10:27.707 "uuid": "85ac8327-6ff4-4625-8900-d608ddb1765a", 00:10:27.707 "assigned_rate_limits": { 00:10:27.707 "rw_ios_per_sec": 0, 00:10:27.707 "rw_mbytes_per_sec": 0, 00:10:27.707 "r_mbytes_per_sec": 0, 00:10:27.707 "w_mbytes_per_sec": 0 00:10:27.707 }, 00:10:27.707 "claimed": false, 00:10:27.707 "zoned": false, 00:10:27.707 "supported_io_types": { 00:10:27.707 "read": true, 00:10:27.707 "write": true, 00:10:27.707 "unmap": true, 00:10:27.707 "write_zeroes": true, 00:10:27.707 "flush": true, 00:10:27.707 "reset": true, 00:10:27.707 "compare": false, 00:10:27.707 "compare_and_write": false, 00:10:27.707 "abort": true, 00:10:27.707 "nvme_admin": false, 00:10:27.707 "nvme_io": false 00:10:27.707 }, 00:10:27.707 "memory_domains": [ 00:10:27.707 { 00:10:27.707 "dma_device_id": "system", 00:10:27.707 "dma_device_type": 1 00:10:27.707 }, 00:10:27.707 { 00:10:27.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.707 "dma_device_type": 2 00:10:27.707 } 00:10:27.707 ], 00:10:27.707 "driver_specific": {} 00:10:27.707 } 00:10:27.707 ]' 00:10:27.707 13:38:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:27.707 13:38:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:27.707 13:38:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:27.707 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.707 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:27.707 [2024-06-10 13:38:42.038403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:27.707 [2024-06-10 13:38:42.038442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.707 [2024-06-10 13:38:42.038462] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ffb090 00:10:27.707 [2024-06-10 13:38:42.038474] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.707 [2024-06-10 13:38:42.039956] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.707 [2024-06-10 13:38:42.039983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:27.707 Passthru0 00:10:27.707 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.707 13:38:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:10:27.707 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.707 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:27.707 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.707 13:38:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:27.707 { 00:10:27.707 "name": "Malloc0", 00:10:27.707 "aliases": [ 00:10:27.707 "85ac8327-6ff4-4625-8900-d608ddb1765a" 00:10:27.707 ], 00:10:27.707 "product_name": "Malloc disk", 00:10:27.707 "block_size": 512, 00:10:27.707 "num_blocks": 16384, 00:10:27.707 "uuid": "85ac8327-6ff4-4625-8900-d608ddb1765a", 00:10:27.707 "assigned_rate_limits": { 00:10:27.707 "rw_ios_per_sec": 0, 00:10:27.707 "rw_mbytes_per_sec": 0, 00:10:27.707 "r_mbytes_per_sec": 0, 00:10:27.707 "w_mbytes_per_sec": 0 00:10:27.707 }, 00:10:27.707 "claimed": true, 00:10:27.707 "claim_type": "exclusive_write", 00:10:27.707 "zoned": false, 00:10:27.707 "supported_io_types": { 00:10:27.707 "read": true, 00:10:27.707 "write": true, 00:10:27.707 "unmap": true, 00:10:27.707 "write_zeroes": true, 00:10:27.707 "flush": true, 00:10:27.707 "reset": true, 00:10:27.707 "compare": false, 00:10:27.707 "compare_and_write": false, 00:10:27.707 "abort": true, 00:10:27.707 "nvme_admin": false, 00:10:27.707 "nvme_io": false 00:10:27.707 }, 00:10:27.707 "memory_domains": [ 00:10:27.707 { 00:10:27.707 "dma_device_id": "system", 00:10:27.707 "dma_device_type": 1 00:10:27.707 }, 00:10:27.707 { 00:10:27.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.708 "dma_device_type": 2 00:10:27.708 } 00:10:27.708 ], 00:10:27.708 "driver_specific": {} 00:10:27.708 }, 00:10:27.708 { 00:10:27.708 "name": "Passthru0", 00:10:27.708 "aliases": [ 00:10:27.708 "96b3de04-4739-51a9-8dc3-42cb15d1ace1" 00:10:27.708 ], 00:10:27.708 "product_name": "passthru", 00:10:27.708 "block_size": 512, 00:10:27.708 "num_blocks": 16384, 00:10:27.708 "uuid": "96b3de04-4739-51a9-8dc3-42cb15d1ace1", 00:10:27.708 "assigned_rate_limits": { 00:10:27.708 "rw_ios_per_sec": 0, 00:10:27.708 "rw_mbytes_per_sec": 0, 00:10:27.708 "r_mbytes_per_sec": 0, 00:10:27.708 "w_mbytes_per_sec": 0 00:10:27.708 }, 00:10:27.708 "claimed": false, 00:10:27.708 "zoned": false, 00:10:27.708 "supported_io_types": { 00:10:27.708 "read": true, 00:10:27.708 "write": true, 00:10:27.708 "unmap": true, 00:10:27.708 "write_zeroes": true, 00:10:27.708 "flush": true, 00:10:27.708 "reset": true, 00:10:27.708 "compare": false, 00:10:27.708 "compare_and_write": false, 00:10:27.708 "abort": true, 00:10:27.708 "nvme_admin": false, 00:10:27.708 "nvme_io": false 00:10:27.708 }, 00:10:27.708 "memory_domains": [ 00:10:27.708 { 00:10:27.708 "dma_device_id": "system", 00:10:27.708 "dma_device_type": 1 00:10:27.708 }, 00:10:27.708 { 00:10:27.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.708 "dma_device_type": 2 00:10:27.708 } 00:10:27.708 ], 00:10:27.708 "driver_specific": { 00:10:27.708 "passthru": { 00:10:27.708 "name": "Passthru0", 00:10:27.708 "base_bdev_name": "Malloc0" 00:10:27.708 } 00:10:27.708 } 00:10:27.708 } 00:10:27.708 ]' 00:10:27.708 13:38:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:27.708 13:38:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:27.708 13:38:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:27.708 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.708 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:27.708 
13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.708 13:38:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:27.708 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.708 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:27.708 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.708 13:38:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:27.708 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.708 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:27.708 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.708 13:38:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:27.708 13:38:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:27.966 13:38:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:27.966 00:10:27.966 real 0m0.282s 00:10:27.966 user 0m0.168s 00:10:27.966 sys 0m0.056s 00:10:27.966 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:27.966 13:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:27.966 ************************************ 00:10:27.966 END TEST rpc_integrity 00:10:27.966 ************************************ 00:10:27.966 13:38:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:27.967 13:38:42 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:27.967 13:38:42 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:27.967 13:38:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.967 ************************************ 00:10:27.967 START TEST rpc_plugins 00:10:27.967 ************************************ 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:10:27.967 13:38:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.967 13:38:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:27.967 13:38:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.967 13:38:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:27.967 { 00:10:27.967 "name": "Malloc1", 00:10:27.967 "aliases": [ 00:10:27.967 "a74743e7-1be3-40cd-b318-e621b41065d6" 00:10:27.967 ], 00:10:27.967 "product_name": "Malloc disk", 00:10:27.967 "block_size": 4096, 00:10:27.967 "num_blocks": 256, 00:10:27.967 "uuid": "a74743e7-1be3-40cd-b318-e621b41065d6", 00:10:27.967 "assigned_rate_limits": { 00:10:27.967 "rw_ios_per_sec": 0, 00:10:27.967 "rw_mbytes_per_sec": 0, 00:10:27.967 "r_mbytes_per_sec": 0, 00:10:27.967 "w_mbytes_per_sec": 0 00:10:27.967 }, 00:10:27.967 "claimed": false, 00:10:27.967 "zoned": false, 00:10:27.967 "supported_io_types": { 00:10:27.967 "read": true, 00:10:27.967 "write": true, 00:10:27.967 "unmap": true, 00:10:27.967 "write_zeroes": true, 00:10:27.967 
"flush": true, 00:10:27.967 "reset": true, 00:10:27.967 "compare": false, 00:10:27.967 "compare_and_write": false, 00:10:27.967 "abort": true, 00:10:27.967 "nvme_admin": false, 00:10:27.967 "nvme_io": false 00:10:27.967 }, 00:10:27.967 "memory_domains": [ 00:10:27.967 { 00:10:27.967 "dma_device_id": "system", 00:10:27.967 "dma_device_type": 1 00:10:27.967 }, 00:10:27.967 { 00:10:27.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.967 "dma_device_type": 2 00:10:27.967 } 00:10:27.967 ], 00:10:27.967 "driver_specific": {} 00:10:27.967 } 00:10:27.967 ]' 00:10:27.967 13:38:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:27.967 13:38:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:27.967 13:38:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.967 13:38:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.967 13:38:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:27.967 13:38:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:27.967 13:38:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:27.967 00:10:27.967 real 0m0.139s 00:10:27.967 user 0m0.091s 00:10:27.967 sys 0m0.022s 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:27.967 13:38:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:27.967 ************************************ 00:10:27.967 END TEST rpc_plugins 00:10:27.967 ************************************ 00:10:28.226 13:38:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:28.226 13:38:42 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:28.226 13:38:42 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:28.226 13:38:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.226 ************************************ 00:10:28.226 START TEST rpc_trace_cmd_test 00:10:28.226 ************************************ 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:28.226 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1217146", 00:10:28.226 "tpoint_group_mask": "0x8", 00:10:28.226 "iscsi_conn": { 00:10:28.226 "mask": "0x2", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 }, 00:10:28.226 "scsi": { 00:10:28.226 "mask": "0x4", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 }, 00:10:28.226 "bdev": { 00:10:28.226 "mask": "0x8", 00:10:28.226 "tpoint_mask": 
"0xffffffffffffffff" 00:10:28.226 }, 00:10:28.226 "nvmf_rdma": { 00:10:28.226 "mask": "0x10", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 }, 00:10:28.226 "nvmf_tcp": { 00:10:28.226 "mask": "0x20", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 }, 00:10:28.226 "ftl": { 00:10:28.226 "mask": "0x40", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 }, 00:10:28.226 "blobfs": { 00:10:28.226 "mask": "0x80", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 }, 00:10:28.226 "dsa": { 00:10:28.226 "mask": "0x200", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 }, 00:10:28.226 "thread": { 00:10:28.226 "mask": "0x400", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 }, 00:10:28.226 "nvme_pcie": { 00:10:28.226 "mask": "0x800", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 }, 00:10:28.226 "iaa": { 00:10:28.226 "mask": "0x1000", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 }, 00:10:28.226 "nvme_tcp": { 00:10:28.226 "mask": "0x2000", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 }, 00:10:28.226 "bdev_nvme": { 00:10:28.226 "mask": "0x4000", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 }, 00:10:28.226 "sock": { 00:10:28.226 "mask": "0x8000", 00:10:28.226 "tpoint_mask": "0x0" 00:10:28.226 } 00:10:28.226 }' 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:28.226 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:28.486 13:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:28.486 00:10:28.486 real 0m0.242s 00:10:28.486 user 0m0.200s 00:10:28.486 sys 0m0.035s 00:10:28.486 13:38:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:28.486 13:38:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 ************************************ 00:10:28.486 END TEST rpc_trace_cmd_test 00:10:28.486 ************************************ 00:10:28.486 13:38:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:28.486 13:38:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:28.486 13:38:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:28.486 13:38:42 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:28.486 13:38:42 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:28.486 13:38:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 ************************************ 00:10:28.486 START TEST rpc_daemon_integrity 00:10:28.486 ************************************ 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:28.486 { 00:10:28.486 "name": "Malloc2", 00:10:28.486 "aliases": [ 00:10:28.486 "81c2332d-e4ce-432e-84e6-ea867cdcb23b" 00:10:28.486 ], 00:10:28.486 "product_name": "Malloc disk", 00:10:28.486 "block_size": 512, 00:10:28.486 "num_blocks": 16384, 00:10:28.486 "uuid": "81c2332d-e4ce-432e-84e6-ea867cdcb23b", 00:10:28.486 "assigned_rate_limits": { 00:10:28.486 "rw_ios_per_sec": 0, 00:10:28.486 "rw_mbytes_per_sec": 0, 00:10:28.486 "r_mbytes_per_sec": 0, 00:10:28.486 "w_mbytes_per_sec": 0 00:10:28.486 }, 00:10:28.486 "claimed": false, 00:10:28.486 "zoned": false, 00:10:28.486 "supported_io_types": { 00:10:28.486 "read": true, 00:10:28.486 "write": true, 00:10:28.486 "unmap": true, 00:10:28.486 "write_zeroes": true, 00:10:28.486 "flush": true, 00:10:28.486 "reset": true, 00:10:28.486 "compare": false, 00:10:28.486 "compare_and_write": false, 00:10:28.486 "abort": true, 00:10:28.486 "nvme_admin": false, 00:10:28.486 "nvme_io": false 00:10:28.486 }, 00:10:28.486 "memory_domains": [ 00:10:28.486 { 00:10:28.486 "dma_device_id": "system", 00:10:28.486 "dma_device_type": 1 00:10:28.486 }, 00:10:28.486 { 00:10:28.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.486 "dma_device_type": 2 00:10:28.486 } 00:10:28.486 ], 00:10:28.486 "driver_specific": {} 00:10:28.486 } 00:10:28.486 ]' 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 [2024-06-10 13:38:42.944942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:28.486 [2024-06-10 13:38:42.944979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.486 [2024-06-10 13:38:42.944996] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ffc590 00:10:28.486 [2024-06-10 13:38:42.945007] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.486 [2024-06-10 13:38:42.946267] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.486 [2024-06-10 13:38:42.946292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:28.486 Passthru0 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:28.486 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.746 13:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:28.746 13:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:28.746 { 00:10:28.746 "name": "Malloc2", 00:10:28.746 "aliases": [ 00:10:28.746 "81c2332d-e4ce-432e-84e6-ea867cdcb23b" 00:10:28.746 ], 00:10:28.746 "product_name": "Malloc disk", 00:10:28.746 "block_size": 512, 00:10:28.746 "num_blocks": 16384, 00:10:28.746 "uuid": "81c2332d-e4ce-432e-84e6-ea867cdcb23b", 00:10:28.746 "assigned_rate_limits": { 00:10:28.746 "rw_ios_per_sec": 0, 00:10:28.746 "rw_mbytes_per_sec": 0, 00:10:28.746 "r_mbytes_per_sec": 0, 00:10:28.746 "w_mbytes_per_sec": 0 00:10:28.746 }, 00:10:28.746 "claimed": true, 00:10:28.746 "claim_type": "exclusive_write", 00:10:28.746 "zoned": false, 00:10:28.746 "supported_io_types": { 00:10:28.746 "read": true, 00:10:28.746 "write": true, 00:10:28.746 "unmap": true, 00:10:28.746 "write_zeroes": true, 00:10:28.746 "flush": true, 00:10:28.746 "reset": true, 00:10:28.746 "compare": false, 00:10:28.746 "compare_and_write": false, 00:10:28.746 "abort": true, 00:10:28.746 "nvme_admin": false, 00:10:28.746 "nvme_io": false 00:10:28.746 }, 00:10:28.746 "memory_domains": [ 00:10:28.746 { 00:10:28.746 "dma_device_id": "system", 00:10:28.746 "dma_device_type": 1 00:10:28.746 }, 00:10:28.746 { 00:10:28.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.746 "dma_device_type": 2 00:10:28.746 } 00:10:28.746 ], 00:10:28.746 "driver_specific": {} 00:10:28.746 }, 00:10:28.746 { 00:10:28.746 "name": "Passthru0", 00:10:28.746 "aliases": [ 00:10:28.746 "28bf6538-a4aa-5cf8-9090-d6277d6c55e2" 00:10:28.746 ], 00:10:28.746 "product_name": "passthru", 00:10:28.746 "block_size": 512, 00:10:28.746 "num_blocks": 16384, 00:10:28.746 "uuid": "28bf6538-a4aa-5cf8-9090-d6277d6c55e2", 00:10:28.746 "assigned_rate_limits": { 00:10:28.746 "rw_ios_per_sec": 0, 00:10:28.746 "rw_mbytes_per_sec": 0, 00:10:28.746 "r_mbytes_per_sec": 0, 00:10:28.746 "w_mbytes_per_sec": 0 00:10:28.746 }, 00:10:28.746 "claimed": false, 00:10:28.746 "zoned": false, 00:10:28.746 "supported_io_types": { 00:10:28.746 "read": true, 00:10:28.746 "write": true, 00:10:28.746 "unmap": true, 00:10:28.746 "write_zeroes": true, 00:10:28.746 "flush": true, 00:10:28.746 "reset": true, 00:10:28.746 "compare": false, 00:10:28.746 "compare_and_write": false, 00:10:28.746 "abort": true, 00:10:28.746 "nvme_admin": false, 00:10:28.746 "nvme_io": false 00:10:28.746 }, 00:10:28.746 "memory_domains": [ 00:10:28.746 { 00:10:28.746 "dma_device_id": "system", 00:10:28.746 "dma_device_type": 1 00:10:28.746 }, 00:10:28.746 { 00:10:28.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.746 "dma_device_type": 2 00:10:28.746 } 00:10:28.746 ], 00:10:28.746 "driver_specific": { 00:10:28.746 "passthru": { 00:10:28.746 "name": "Passthru0", 00:10:28.746 "base_bdev_name": "Malloc2" 00:10:28.746 } 00:10:28.746 } 00:10:28.746 } 00:10:28.746 ]' 00:10:28.746 13:38:42 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:28.746 13:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:28.746 13:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:28.746 13:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:28.746 13:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:28.747 00:10:28.747 real 0m0.289s 00:10:28.747 user 0m0.176s 00:10:28.747 sys 0m0.055s 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:28.747 13:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.747 ************************************ 00:10:28.747 END TEST rpc_daemon_integrity 00:10:28.747 ************************************ 00:10:28.747 13:38:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:28.747 13:38:43 rpc -- rpc/rpc.sh@84 -- # killprocess 1217146 00:10:28.747 13:38:43 rpc -- common/autotest_common.sh@949 -- # '[' -z 1217146 ']' 00:10:28.747 13:38:43 rpc -- common/autotest_common.sh@953 -- # kill -0 1217146 00:10:28.747 13:38:43 rpc -- common/autotest_common.sh@954 -- # uname 00:10:28.747 13:38:43 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:28.747 13:38:43 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1217146 00:10:28.747 13:38:43 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:28.747 13:38:43 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:28.747 13:38:43 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1217146' 00:10:28.747 killing process with pid 1217146 00:10:28.747 13:38:43 rpc -- common/autotest_common.sh@968 -- # kill 1217146 00:10:28.747 13:38:43 rpc -- common/autotest_common.sh@973 -- # wait 1217146 00:10:29.316 00:10:29.316 real 0m2.734s 00:10:29.316 user 0m3.480s 00:10:29.316 sys 0m0.900s 00:10:29.316 13:38:43 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:29.316 13:38:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.316 ************************************ 00:10:29.316 END TEST rpc 00:10:29.316 ************************************ 00:10:29.316 13:38:43 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:10:29.316 13:38:43 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:29.316 13:38:43 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:29.316 13:38:43 -- common/autotest_common.sh@10 -- # set +x 00:10:29.316 ************************************ 00:10:29.316 START TEST skip_rpc 00:10:29.316 ************************************ 00:10:29.316 13:38:43 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:10:29.316 * Looking for test storage... 00:10:29.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:10:29.316 13:38:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:10:29.316 13:38:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:10:29.316 13:38:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:29.316 13:38:43 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:29.316 13:38:43 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:29.316 13:38:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.316 ************************************ 00:10:29.316 START TEST skip_rpc 00:10:29.316 ************************************ 00:10:29.316 13:38:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:10:29.316 13:38:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1217730 00:10:29.316 13:38:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:29.316 13:38:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:29.316 13:38:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:29.575 [2024-06-10 13:38:43.815019] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:10:29.575 [2024-06-10 13:38:43.815077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217730 ] 00:10:29.576 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.576 [2024-06-10 13:38:43.936437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.576 [2024-06-10 13:38:44.019184] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.852 13:38:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1217730 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 1217730 ']' 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 1217730 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1217730 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1217730' 00:10:34.853 killing process with pid 1217730 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 1217730 00:10:34.853 13:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 1217730 00:10:34.853 00:10:34.853 real 0m5.403s 00:10:34.853 user 0m5.098s 00:10:34.853 sys 0m0.346s 00:10:34.853 13:38:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:34.853 13:38:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.853 ************************************ 00:10:34.853 END TEST skip_rpc 
00:10:34.853 ************************************ 00:10:34.853 13:38:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:34.853 13:38:49 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:34.853 13:38:49 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:34.853 13:38:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.853 ************************************ 00:10:34.853 START TEST skip_rpc_with_json 00:10:34.853 ************************************ 00:10:34.853 13:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:10:34.853 13:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:34.853 13:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1218719 00:10:34.853 13:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:34.853 13:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:34.853 13:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1218719 00:10:34.853 13:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 1218719 ']' 00:10:34.853 13:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.853 13:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:34.853 13:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.853 13:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:34.853 13:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:34.853 [2024-06-10 13:38:49.306337] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:10:34.853 [2024-06-10 13:38:49.306394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218719 ] 00:10:35.112 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.112 [2024-06-10 13:38:49.427606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.112 [2024-06-10 13:38:49.512199] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.051 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:36.051 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:10:36.051 13:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:36.051 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:36.051 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:36.051 [2024-06-10 13:38:50.209445] nvmf_rpc.c:2560:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:36.051 request: 00:10:36.051 { 00:10:36.051 "trtype": "tcp", 00:10:36.051 "method": "nvmf_get_transports", 00:10:36.051 "req_id": 1 00:10:36.051 } 00:10:36.051 Got JSON-RPC error response 00:10:36.051 response: 00:10:36.051 { 00:10:36.051 "code": -19, 00:10:36.051 "message": "No such device" 00:10:36.051 } 00:10:36.051 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:10:36.051 13:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:36.051 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:36.051 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:36.051 [2024-06-10 13:38:50.221570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.051 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:36.052 13:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:36.052 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:36.052 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:36.052 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:36.052 13:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:10:36.052 { 00:10:36.052 "subsystems": [ 00:10:36.052 { 00:10:36.052 "subsystem": "vfio_user_target", 00:10:36.052 "config": null 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "keyring", 00:10:36.052 "config": [] 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "iobuf", 00:10:36.052 "config": [ 00:10:36.052 { 00:10:36.052 "method": "iobuf_set_options", 00:10:36.052 "params": { 00:10:36.052 "small_pool_count": 8192, 00:10:36.052 "large_pool_count": 1024, 00:10:36.052 "small_bufsize": 8192, 00:10:36.052 "large_bufsize": 135168 00:10:36.052 } 00:10:36.052 } 00:10:36.052 ] 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "sock", 00:10:36.052 "config": [ 00:10:36.052 { 00:10:36.052 "method": "sock_set_default_impl", 00:10:36.052 "params": { 00:10:36.052 "impl_name": "posix" 00:10:36.052 } 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "method": 
"sock_impl_set_options", 00:10:36.052 "params": { 00:10:36.052 "impl_name": "ssl", 00:10:36.052 "recv_buf_size": 4096, 00:10:36.052 "send_buf_size": 4096, 00:10:36.052 "enable_recv_pipe": true, 00:10:36.052 "enable_quickack": false, 00:10:36.052 "enable_placement_id": 0, 00:10:36.052 "enable_zerocopy_send_server": true, 00:10:36.052 "enable_zerocopy_send_client": false, 00:10:36.052 "zerocopy_threshold": 0, 00:10:36.052 "tls_version": 0, 00:10:36.052 "enable_ktls": false, 00:10:36.052 "enable_new_session_tickets": true 00:10:36.052 } 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "method": "sock_impl_set_options", 00:10:36.052 "params": { 00:10:36.052 "impl_name": "posix", 00:10:36.052 "recv_buf_size": 2097152, 00:10:36.052 "send_buf_size": 2097152, 00:10:36.052 "enable_recv_pipe": true, 00:10:36.052 "enable_quickack": false, 00:10:36.052 "enable_placement_id": 0, 00:10:36.052 "enable_zerocopy_send_server": true, 00:10:36.052 "enable_zerocopy_send_client": false, 00:10:36.052 "zerocopy_threshold": 0, 00:10:36.052 "tls_version": 0, 00:10:36.052 "enable_ktls": false, 00:10:36.052 "enable_new_session_tickets": false 00:10:36.052 } 00:10:36.052 } 00:10:36.052 ] 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "vmd", 00:10:36.052 "config": [] 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "accel", 00:10:36.052 "config": [ 00:10:36.052 { 00:10:36.052 "method": "accel_set_options", 00:10:36.052 "params": { 00:10:36.052 "small_cache_size": 128, 00:10:36.052 "large_cache_size": 16, 00:10:36.052 "task_count": 2048, 00:10:36.052 "sequence_count": 2048, 00:10:36.052 "buf_count": 2048 00:10:36.052 } 00:10:36.052 } 00:10:36.052 ] 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "bdev", 00:10:36.052 "config": [ 00:10:36.052 { 00:10:36.052 "method": "bdev_set_options", 00:10:36.052 "params": { 00:10:36.052 "bdev_io_pool_size": 65535, 00:10:36.052 "bdev_io_cache_size": 256, 00:10:36.052 "bdev_auto_examine": true, 00:10:36.052 "iobuf_small_cache_size": 128, 00:10:36.052 "iobuf_large_cache_size": 16 00:10:36.052 } 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "method": "bdev_raid_set_options", 00:10:36.052 "params": { 00:10:36.052 "process_window_size_kb": 1024 00:10:36.052 } 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "method": "bdev_iscsi_set_options", 00:10:36.052 "params": { 00:10:36.052 "timeout_sec": 30 00:10:36.052 } 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "method": "bdev_nvme_set_options", 00:10:36.052 "params": { 00:10:36.052 "action_on_timeout": "none", 00:10:36.052 "timeout_us": 0, 00:10:36.052 "timeout_admin_us": 0, 00:10:36.052 "keep_alive_timeout_ms": 10000, 00:10:36.052 "arbitration_burst": 0, 00:10:36.052 "low_priority_weight": 0, 00:10:36.052 "medium_priority_weight": 0, 00:10:36.052 "high_priority_weight": 0, 00:10:36.052 "nvme_adminq_poll_period_us": 10000, 00:10:36.052 "nvme_ioq_poll_period_us": 0, 00:10:36.052 "io_queue_requests": 0, 00:10:36.052 "delay_cmd_submit": true, 00:10:36.052 "transport_retry_count": 4, 00:10:36.052 "bdev_retry_count": 3, 00:10:36.052 "transport_ack_timeout": 0, 00:10:36.052 "ctrlr_loss_timeout_sec": 0, 00:10:36.052 "reconnect_delay_sec": 0, 00:10:36.052 "fast_io_fail_timeout_sec": 0, 00:10:36.052 "disable_auto_failback": false, 00:10:36.052 "generate_uuids": false, 00:10:36.052 "transport_tos": 0, 00:10:36.052 "nvme_error_stat": false, 00:10:36.052 "rdma_srq_size": 0, 00:10:36.052 "io_path_stat": false, 00:10:36.052 "allow_accel_sequence": false, 00:10:36.052 "rdma_max_cq_size": 0, 00:10:36.052 "rdma_cm_event_timeout_ms": 0, 
00:10:36.052 "dhchap_digests": [ 00:10:36.052 "sha256", 00:10:36.052 "sha384", 00:10:36.052 "sha512" 00:10:36.052 ], 00:10:36.052 "dhchap_dhgroups": [ 00:10:36.052 "null", 00:10:36.052 "ffdhe2048", 00:10:36.052 "ffdhe3072", 00:10:36.052 "ffdhe4096", 00:10:36.052 "ffdhe6144", 00:10:36.052 "ffdhe8192" 00:10:36.052 ] 00:10:36.052 } 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "method": "bdev_nvme_set_hotplug", 00:10:36.052 "params": { 00:10:36.052 "period_us": 100000, 00:10:36.052 "enable": false 00:10:36.052 } 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "method": "bdev_wait_for_examine" 00:10:36.052 } 00:10:36.052 ] 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "scsi", 00:10:36.052 "config": null 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "scheduler", 00:10:36.052 "config": [ 00:10:36.052 { 00:10:36.052 "method": "framework_set_scheduler", 00:10:36.052 "params": { 00:10:36.052 "name": "static" 00:10:36.052 } 00:10:36.052 } 00:10:36.052 ] 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "vhost_scsi", 00:10:36.052 "config": [] 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "vhost_blk", 00:10:36.052 "config": [] 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "ublk", 00:10:36.052 "config": [] 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "nbd", 00:10:36.052 "config": [] 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "nvmf", 00:10:36.052 "config": [ 00:10:36.052 { 00:10:36.052 "method": "nvmf_set_config", 00:10:36.052 "params": { 00:10:36.052 "discovery_filter": "match_any", 00:10:36.052 "admin_cmd_passthru": { 00:10:36.052 "identify_ctrlr": false 00:10:36.052 } 00:10:36.052 } 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "method": "nvmf_set_max_subsystems", 00:10:36.052 "params": { 00:10:36.052 "max_subsystems": 1024 00:10:36.052 } 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "method": "nvmf_set_crdt", 00:10:36.052 "params": { 00:10:36.052 "crdt1": 0, 00:10:36.052 "crdt2": 0, 00:10:36.052 "crdt3": 0 00:10:36.052 } 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "method": "nvmf_create_transport", 00:10:36.052 "params": { 00:10:36.052 "trtype": "TCP", 00:10:36.052 "max_queue_depth": 128, 00:10:36.052 "max_io_qpairs_per_ctrlr": 127, 00:10:36.052 "in_capsule_data_size": 4096, 00:10:36.052 "max_io_size": 131072, 00:10:36.052 "io_unit_size": 131072, 00:10:36.052 "max_aq_depth": 128, 00:10:36.052 "num_shared_buffers": 511, 00:10:36.052 "buf_cache_size": 4294967295, 00:10:36.052 "dif_insert_or_strip": false, 00:10:36.052 "zcopy": false, 00:10:36.052 "c2h_success": true, 00:10:36.052 "sock_priority": 0, 00:10:36.052 "abort_timeout_sec": 1, 00:10:36.052 "ack_timeout": 0, 00:10:36.052 "data_wr_pool_size": 0 00:10:36.052 } 00:10:36.052 } 00:10:36.052 ] 00:10:36.052 }, 00:10:36.052 { 00:10:36.052 "subsystem": "iscsi", 00:10:36.052 "config": [ 00:10:36.052 { 00:10:36.052 "method": "iscsi_set_options", 00:10:36.052 "params": { 00:10:36.052 "node_base": "iqn.2016-06.io.spdk", 00:10:36.052 "max_sessions": 128, 00:10:36.052 "max_connections_per_session": 2, 00:10:36.052 "max_queue_depth": 64, 00:10:36.052 "default_time2wait": 2, 00:10:36.053 "default_time2retain": 20, 00:10:36.053 "first_burst_length": 8192, 00:10:36.053 "immediate_data": true, 00:10:36.053 "allow_duplicated_isid": false, 00:10:36.053 "error_recovery_level": 0, 00:10:36.053 "nop_timeout": 60, 00:10:36.053 "nop_in_interval": 30, 00:10:36.053 "disable_chap": false, 00:10:36.053 "require_chap": false, 00:10:36.053 "mutual_chap": false, 00:10:36.053 "chap_group": 0, 00:10:36.053 
"max_large_datain_per_connection": 64, 00:10:36.053 "max_r2t_per_connection": 4, 00:10:36.053 "pdu_pool_size": 36864, 00:10:36.053 "immediate_data_pool_size": 16384, 00:10:36.053 "data_out_pool_size": 2048 00:10:36.053 } 00:10:36.053 } 00:10:36.053 ] 00:10:36.053 } 00:10:36.053 ] 00:10:36.053 } 00:10:36.053 13:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:36.053 13:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1218719 00:10:36.053 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1218719 ']' 00:10:36.053 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1218719 00:10:36.053 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:10:36.053 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:36.053 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1218719 00:10:36.053 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:36.053 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:36.053 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1218719' 00:10:36.053 killing process with pid 1218719 00:10:36.053 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1218719 00:10:36.053 13:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1218719 00:10:36.622 13:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1218998 00:10:36.622 13:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:36.622 13:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:10:41.898 13:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1218998 00:10:41.898 13:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1218998 ']' 00:10:41.898 13:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1218998 00:10:41.898 13:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:10:41.898 13:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:41.898 13:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1218998 00:10:41.898 13:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:41.898 13:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:41.898 13:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1218998' 00:10:41.898 killing process with pid 1218998 00:10:41.898 13:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1218998 00:10:41.898 13:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1218998 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:10:41.898 00:10:41.898 real 0m6.944s 00:10:41.898 user 0m6.739s 00:10:41.898 sys 0m0.771s 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:41.898 ************************************ 00:10:41.898 END TEST skip_rpc_with_json 00:10:41.898 ************************************ 00:10:41.898 13:38:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:41.898 13:38:56 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:41.898 13:38:56 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:41.898 13:38:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.898 ************************************ 00:10:41.898 START TEST skip_rpc_with_delay 00:10:41.898 ************************************ 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:41.898 [2024-06-10 13:38:56.341710] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
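The skip_rpc_with_delay case above only has to demonstrate one thing: spdk_tgt must refuse --wait-for-rpc once --no-rpc-server is given, since nothing could ever release the wait. A minimal stand-alone sketch of that assertion, stripped of the NOT/valid_exec_arg plumbing from autotest_common.sh (the SPDK_BIN default is an assumption; point it at your own build tree):

  #!/usr/bin/env bash
  # Sketch of the skip_rpc_with_delay assertion: spdk_tgt must exit non-zero
  # when --wait-for-rpc is combined with --no-rpc-server.
  SPDK_BIN=${SPDK_BIN:-./build/bin/spdk_tgt}   # assumed location

  if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "FAIL: target started although no RPC server could release the wait" >&2
      exit 1
  fi
  echo "PASS: target rejected --wait-for-rpc without an RPC server"

The real/user/sys lines in the log come from the harness timing each test case the same way; they are not specific to this check.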
00:10:41.898 [2024-06-10 13:38:56.341800] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:41.898 00:10:41.898 real 0m0.082s 00:10:41.898 user 0m0.045s 00:10:41.898 sys 0m0.037s 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:41.898 13:38:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:41.898 ************************************ 00:10:41.898 END TEST skip_rpc_with_delay 00:10:41.898 ************************************ 00:10:42.158 13:38:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:42.158 13:38:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:42.158 13:38:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:42.158 13:38:56 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:42.158 13:38:56 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:42.158 13:38:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.158 ************************************ 00:10:42.158 START TEST exit_on_failed_rpc_init 00:10:42.158 ************************************ 00:10:42.158 13:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:10:42.158 13:38:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1220102 00:10:42.158 13:38:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1220102 00:10:42.158 13:38:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:42.158 13:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 1220102 ']' 00:10:42.158 13:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.158 13:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:42.158 13:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.158 13:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:42.158 13:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:42.158 [2024-06-10 13:38:56.508094] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:10:42.158 [2024-06-10 13:38:56.508155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220102 ] 00:10:42.158 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.158 [2024-06-10 13:38:56.627045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.417 [2024-06-10 13:38:56.714180] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:42.986 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:42.987 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:42.987 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:43.246 [2024-06-10 13:38:57.465165] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:10:43.246 [2024-06-10 13:38:57.465230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220125 ] 00:10:43.246 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.246 [2024-06-10 13:38:57.575518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.246 [2024-06-10 13:38:57.656710] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.246 [2024-06-10 13:38:57.656787] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
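The error above is exactly what exit_on_failed_rpc_init is after: a first target already owns the default /var/tmp/spdk.sock, so a second instance launched on another core mask must fail rpc_listen and leave spdk_app_start with a non-zero status. A simplified, hedged reproduction of that contention (paths and the fixed sleep are assumptions; the real test uses waitforlisten instead of sleeping):

  #!/usr/bin/env bash
  # Sketch: two spdk_tgt instances contending for the default RPC socket.
  SPDK_BIN=${SPDK_BIN:-./build/bin/spdk_tgt}   # assumed location

  "$SPDK_BIN" -m 0x1 &            # first instance binds /var/tmp/spdk.sock
  first_pid=$!
  sleep 3                         # crude wait; the harness polls the socket

  if "$SPDK_BIN" -m 0x2; then     # second instance must fail RPC initialization
      echo "FAIL: second target initialized despite a busy RPC socket" >&2
  else
      echo "PASS: second target exited non-zero as expected"
  fi
  kill -SIGINT "$first_pid"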
00:10:43.246 [2024-06-10 13:38:57.656805] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:43.246 [2024-06-10 13:38:57.656816] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1220102 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 1220102 ']' 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 1220102 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1220102 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1220102' 00:10:43.506 killing process with pid 1220102 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 1220102 00:10:43.506 13:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 1220102 00:10:43.765 00:10:43.765 real 0m1.682s 00:10:43.765 user 0m1.954s 00:10:43.765 sys 0m0.548s 00:10:43.765 13:38:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:43.765 13:38:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:43.765 ************************************ 00:10:43.766 END TEST exit_on_failed_rpc_init 00:10:43.766 ************************************ 00:10:43.766 13:38:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:10:43.766 00:10:43.766 real 0m14.568s 00:10:43.766 user 0m14.006s 00:10:43.766 sys 0m2.025s 00:10:43.766 13:38:58 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:43.766 13:38:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.766 ************************************ 00:10:43.766 END TEST skip_rpc 00:10:43.766 ************************************ 00:10:43.766 13:38:58 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:43.766 13:38:58 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:43.766 13:38:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:43.766 13:38:58 -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.025 ************************************ 00:10:44.025 START TEST rpc_client 00:10:44.025 ************************************ 00:10:44.025 13:38:58 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:44.025 * Looking for test storage... 00:10:44.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:10:44.025 13:38:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:10:44.025 OK 00:10:44.025 13:38:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:44.025 00:10:44.025 real 0m0.127s 00:10:44.025 user 0m0.054s 00:10:44.025 sys 0m0.083s 00:10:44.025 13:38:58 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:44.025 13:38:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:44.025 ************************************ 00:10:44.025 END TEST rpc_client 00:10:44.025 ************************************ 00:10:44.025 13:38:58 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:10:44.025 13:38:58 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:44.025 13:38:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:44.025 13:38:58 -- common/autotest_common.sh@10 -- # set +x 00:10:44.025 ************************************ 00:10:44.025 START TEST json_config 00:10:44.025 ************************************ 00:10:44.025 13:38:58 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:10:44.285 13:38:58 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:10:44.285 13:38:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.286 13:38:58 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.286 13:38:58 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.286 13:38:58 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.286 13:38:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.286 13:38:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.286 13:38:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.286 13:38:58 json_config -- paths/export.sh@5 -- # export PATH 00:10:44.286 13:38:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@47 -- # : 0 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.286 13:38:58 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:10:44.286 INFO: JSON configuration test init 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:10:44.286 13:38:58 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:44.286 13:38:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:10:44.286 13:38:58 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:44.286 13:38:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:44.286 13:38:58 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:10:44.286 13:38:58 json_config -- json_config/common.sh@9 -- # local app=target 00:10:44.286 13:38:58 json_config -- json_config/common.sh@10 -- # shift 00:10:44.286 13:38:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:44.286 13:38:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:44.286 13:38:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:44.286 13:38:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:44.286 13:38:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:44.286 13:38:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1220495 00:10:44.286 13:38:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:44.286 Waiting for target to run... 
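For the json_config test the target is launched with a private RPC socket (-r /var/tmp/spdk_tgt.sock) and --wait-for-rpc, so every tgt_rpc call that follows is simply scripts/rpc.py aimed at that socket. A compact sketch of the same pattern, with an assumed checkout path and a naive socket wait standing in for the harness's waitforlisten helper:

  #!/usr/bin/env bash
  # Sketch: start a target that waits to be configured over RPC, then drive it
  # through its private UNIX socket. ROOTDIR is an assumed checkout path.
  ROOTDIR=${ROOTDIR:-.}
  SOCK=/var/tmp/spdk_tgt.sock

  "$ROOTDIR"/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
  tgt_pid=$!
  until [ -S "$SOCK" ]; do sleep 0.2; done    # naive stand-in for waitforlisten

  "$ROOTDIR"/scripts/rpc.py -s "$SOCK" framework_start_init
  "$ROOTDIR"/scripts/rpc.py -s "$SOCK" save_config > spdk_tgt_config.json
  kill -SIGINT "$tgt_pid"

In the log the configuration is instead loaded with load_config fed by gen_nvme.sh --json-with-subsystems; framework_start_init here is just the simplest RPC that moves a --wait-for-rpc target out of its waiting state.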
00:10:44.286 13:38:58 json_config -- json_config/common.sh@25 -- # waitforlisten 1220495 /var/tmp/spdk_tgt.sock 00:10:44.286 13:38:58 json_config -- common/autotest_common.sh@830 -- # '[' -z 1220495 ']' 00:10:44.286 13:38:58 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:44.286 13:38:58 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:44.286 13:38:58 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:44.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:44.286 13:38:58 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:44.286 13:38:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:44.286 13:38:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:44.286 [2024-06-10 13:38:58.638367] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:10:44.286 [2024-06-10 13:38:58.638431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220495 ] 00:10:44.286 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.547 [2024-06-10 13:38:58.981568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.806 [2024-06-10 13:38:59.056639] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.065 13:38:59 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:45.065 13:38:59 json_config -- common/autotest_common.sh@863 -- # return 0 00:10:45.065 13:38:59 json_config -- json_config/common.sh@26 -- # echo '' 00:10:45.065 00:10:45.065 13:38:59 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:10:45.065 13:38:59 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:10:45.065 13:38:59 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:45.065 13:38:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:45.065 13:38:59 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:10:45.065 13:38:59 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:10:45.065 13:38:59 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:45.065 13:38:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:45.325 13:38:59 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:45.325 13:38:59 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:10:45.325 13:38:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:48.613 13:39:02 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:48.613 13:39:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:48.613 13:39:02 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:10:48.613 13:39:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@48 -- # local get_types 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:10:48.613 13:39:02 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:48.613 13:39:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@55 -- # return 0 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:10:48.613 13:39:02 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:48.613 13:39:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:10:48.613 13:39:02 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:48.613 13:39:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:48.872 MallocForNvmf0 00:10:48.872 13:39:03 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:48.872 13:39:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:49.131 MallocForNvmf1 00:10:49.131 13:39:03 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:10:49.131 13:39:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:10:49.131 [2024-06-10 13:39:03.591170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.390 13:39:03 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:49.390 13:39:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:49.390 13:39:03 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:49.390 13:39:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:49.649 13:39:04 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:49.649 13:39:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:49.908 13:39:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:10:49.908 13:39:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:10:50.178 [2024-06-10 13:39:04.458009] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:50.178 13:39:04 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:10:50.178 13:39:04 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:50.178 13:39:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:50.178 13:39:04 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:10:50.178 13:39:04 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:50.178 13:39:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:50.178 13:39:04 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:10:50.178 13:39:04 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:50.178 13:39:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:50.493 MallocBdevForConfigChangeCheck 00:10:50.493 13:39:04 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:10:50.493 13:39:04 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:50.493 13:39:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:50.493 13:39:04 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:10:50.493 13:39:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:50.770 13:39:05 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:10:50.770 INFO: shutting down applications... 
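The RPC sequence above (create the TCP transport, create nqn.2016-06.io.spdk:cnode1, attach the two malloc bdevs as namespaces, add the 127.0.0.1:4420 listener) is what the later save_config captures into spdk_tgt_config.json. Issued by hand against a running target, the same sequence looks roughly like the sketch below; the commands and sizes mirror the log, while the socket-path wrapper is an assumption:

  #!/usr/bin/env bash
  # Sketch: rebuild the NVMe-oF/TCP configuration seen above with direct
  # rpc.py calls against an already-running target.
  RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"   # assumed paths

  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420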
00:10:50.770 13:39:05 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:10:50.770 13:39:05 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:10:50.770 13:39:05 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:10:50.770 13:39:05 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:53.305 Calling clear_iscsi_subsystem 00:10:53.305 Calling clear_nvmf_subsystem 00:10:53.305 Calling clear_nbd_subsystem 00:10:53.305 Calling clear_ublk_subsystem 00:10:53.305 Calling clear_vhost_blk_subsystem 00:10:53.305 Calling clear_vhost_scsi_subsystem 00:10:53.305 Calling clear_bdev_subsystem 00:10:53.305 13:39:07 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:10:53.305 13:39:07 json_config -- json_config/json_config.sh@343 -- # count=100 00:10:53.305 13:39:07 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:10:53.305 13:39:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:53.305 13:39:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:53.305 13:39:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:10:53.565 13:39:07 json_config -- json_config/json_config.sh@345 -- # break 00:10:53.565 13:39:07 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:10:53.565 13:39:07 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:10:53.565 13:39:07 json_config -- json_config/common.sh@31 -- # local app=target 00:10:53.565 13:39:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:53.565 13:39:07 json_config -- json_config/common.sh@35 -- # [[ -n 1220495 ]] 00:10:53.565 13:39:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1220495 00:10:53.565 13:39:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:53.565 13:39:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:53.565 13:39:07 json_config -- json_config/common.sh@41 -- # kill -0 1220495 00:10:53.565 13:39:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:53.824 13:39:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:53.824 13:39:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:53.824 13:39:08 json_config -- json_config/common.sh@41 -- # kill -0 1220495 00:10:53.824 13:39:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:53.824 13:39:08 json_config -- json_config/common.sh@43 -- # break 00:10:53.824 13:39:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:53.824 13:39:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:53.824 SPDK target shutdown done 00:10:53.824 13:39:08 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:10:53.824 INFO: relaunching applications... 
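The shutdown just logged follows the pattern from json_config/common.sh visible in the trace: send SIGINT once, then poll the pid with kill -0 in half-second steps, up to 30 attempts, before declaring the target gone. A generic, hedged rendering of that loop:

  # Sketch: wait up to ~15 seconds for a target to exit after SIGINT,
  # mirroring the i<30 / sleep 0.5 loop shown in the trace above.
  shutdown_target() {
      local pid=$1
      kill -SIGINT "$pid" 2>/dev/null
      for (( i = 0; i < 30; i++ )); do
          if ! kill -0 "$pid" 2>/dev/null; then
              echo "SPDK target shutdown done"
              return 0
          fi
          sleep 0.5
      done
      echo "target did not exit after SIGINT" >&2
      return 1
  }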
00:10:53.824 13:39:08 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:54.084 13:39:08 json_config -- json_config/common.sh@9 -- # local app=target 00:10:54.084 13:39:08 json_config -- json_config/common.sh@10 -- # shift 00:10:54.084 13:39:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:54.084 13:39:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:54.084 13:39:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:54.084 13:39:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:54.084 13:39:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:54.084 13:39:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1222864 00:10:54.084 13:39:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:54.084 Waiting for target to run... 00:10:54.084 13:39:08 json_config -- json_config/common.sh@25 -- # waitforlisten 1222864 /var/tmp/spdk_tgt.sock 00:10:54.084 13:39:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:54.084 13:39:08 json_config -- common/autotest_common.sh@830 -- # '[' -z 1222864 ']' 00:10:54.084 13:39:08 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:54.084 13:39:08 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:54.084 13:39:08 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:54.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:54.084 13:39:08 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:54.084 13:39:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:54.084 [2024-06-10 13:39:08.357136] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:10:54.084 [2024-06-10 13:39:08.357211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222864 ] 00:10:54.084 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.343 [2024-06-10 13:39:08.707143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.343 [2024-06-10 13:39:08.782075] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.634 [2024-06-10 13:39:11.842910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.634 [2024-06-10 13:39:11.875334] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:57.634 13:39:11 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:57.634 13:39:11 json_config -- common/autotest_common.sh@863 -- # return 0 00:10:57.634 13:39:11 json_config -- json_config/common.sh@26 -- # echo '' 00:10:57.634 00:10:57.634 13:39:11 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:10:57.634 13:39:11 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:57.634 INFO: Checking if target configuration is the same... 
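The "is the configuration the same" check that follows works by normalizing both sides before diffing: the live configuration is pulled with save_config, the spdk_tgt_config.json the target was relaunched from is read back, and config_filter.py -method sort puts both into a canonical order so a plain diff -u is meaningful. A condensed sketch of that comparison, with temp-file handling simplified relative to json_diff.sh:

  #!/usr/bin/env bash
  # Sketch: compare a running target's configuration against a saved JSON file.
  ROOTDIR=${ROOTDIR:-.}                 # assumed checkout path
  SOCK=/var/tmp/spdk_tgt.sock
  SAVED=spdk_tgt_config.json

  live=$(mktemp); disk=$(mktemp)
  "$ROOTDIR"/scripts/rpc.py -s "$SOCK" save_config \
      | "$ROOTDIR"/test/json_config/config_filter.py -method sort > "$live"
  "$ROOTDIR"/test/json_config/config_filter.py -method sort < "$SAVED" > "$disk"

  if diff -u "$disk" "$live"; then
      echo "INFO: JSON config files are the same"
  else
      echo "INFO: configuration change detected."
  fi
  rm -f "$live" "$disk"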
00:10:57.634 13:39:11 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:57.634 13:39:11 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:10:57.634 13:39:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:57.634 + '[' 2 -ne 2 ']' 00:10:57.634 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:10:57.634 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:10:57.635 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:57.635 +++ basename /dev/fd/62 00:10:57.635 ++ mktemp /tmp/62.XXX 00:10:57.635 + tmp_file_1=/tmp/62.0gx 00:10:57.635 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:57.635 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:57.635 + tmp_file_2=/tmp/spdk_tgt_config.json.FDg 00:10:57.635 + ret=0 00:10:57.635 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:57.903 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:57.903 + diff -u /tmp/62.0gx /tmp/spdk_tgt_config.json.FDg 00:10:57.903 + echo 'INFO: JSON config files are the same' 00:10:57.903 INFO: JSON config files are the same 00:10:57.904 + rm /tmp/62.0gx /tmp/spdk_tgt_config.json.FDg 00:10:57.904 + exit 0 00:10:57.904 13:39:12 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:10:57.904 13:39:12 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:57.904 INFO: changing configuration and checking if this can be detected... 00:10:57.904 13:39:12 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:57.904 13:39:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:58.168 13:39:12 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:58.168 13:39:12 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:10:58.168 13:39:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:58.168 + '[' 2 -ne 2 ']' 00:10:58.168 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:10:58.168 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:10:58.168 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:58.168 +++ basename /dev/fd/62 00:10:58.168 ++ mktemp /tmp/62.XXX 00:10:58.168 + tmp_file_1=/tmp/62.915 00:10:58.168 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:58.168 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:58.168 + tmp_file_2=/tmp/spdk_tgt_config.json.3WA 00:10:58.168 + ret=0 00:10:58.168 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:58.737 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:58.737 + diff -u /tmp/62.915 /tmp/spdk_tgt_config.json.3WA 00:10:58.737 + ret=1 00:10:58.737 + echo '=== Start of file: /tmp/62.915 ===' 00:10:58.737 + cat /tmp/62.915 00:10:58.737 + echo '=== End of file: /tmp/62.915 ===' 00:10:58.737 + echo '' 00:10:58.737 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3WA ===' 00:10:58.737 + cat /tmp/spdk_tgt_config.json.3WA 00:10:58.737 + echo '=== End of file: /tmp/spdk_tgt_config.json.3WA ===' 00:10:58.737 + echo '' 00:10:58.737 + rm /tmp/62.915 /tmp/spdk_tgt_config.json.3WA 00:10:58.737 + exit 1 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:10:58.737 INFO: configuration change detected. 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@317 -- # [[ -n 1222864 ]] 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@193 -- # uname -s 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:58.737 13:39:13 json_config -- json_config/json_config.sh@323 -- # killprocess 1222864 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@949 -- # '[' -z 1222864 ']' 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@953 -- # kill -0 1222864 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@954 -- # uname 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:58.737 13:39:13 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1222864 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1222864' 00:10:58.737 killing process with pid 1222864 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@968 -- # kill 1222864 00:10:58.737 13:39:13 json_config -- common/autotest_common.sh@973 -- # wait 1222864 00:11:01.270 13:39:15 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:11:01.270 13:39:15 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:11:01.270 13:39:15 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:01.270 13:39:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:01.270 13:39:15 json_config -- json_config/json_config.sh@328 -- # return 0 00:11:01.270 13:39:15 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:11:01.270 INFO: Success 00:11:01.270 00:11:01.270 real 0m16.736s 00:11:01.270 user 0m18.056s 00:11:01.270 sys 0m2.351s 00:11:01.270 13:39:15 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:01.270 13:39:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:01.270 ************************************ 00:11:01.270 END TEST json_config 00:11:01.270 ************************************ 00:11:01.270 13:39:15 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:11:01.270 13:39:15 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:01.270 13:39:15 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:01.270 13:39:15 -- common/autotest_common.sh@10 -- # set +x 00:11:01.270 ************************************ 00:11:01.270 START TEST json_config_extra_key 00:11:01.270 ************************************ 00:11:01.270 13:39:15 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.270 13:39:15 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.270 13:39:15 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.270 13:39:15 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.270 13:39:15 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.270 13:39:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.270 13:39:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.270 13:39:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.270 13:39:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:11:01.270 13:39:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.270 13:39:15 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:01.270 13:39:15 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:11:01.270 INFO: launching applications... 00:11:01.270 13:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:11:01.270 13:39:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:01.270 13:39:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:01.270 13:39:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:01.270 13:39:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:01.270 13:39:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:01.270 13:39:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:01.270 13:39:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:01.270 13:39:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1224220 00:11:01.270 13:39:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:01.270 Waiting for target to run... 
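Where json_config exercised the save/clear/reload cycle, json_config_extra_key simply boots the target straight from a pre-written file (--json extra_key.json) and verifies it comes up and can be torn down. A hedged sketch of that start/verify/stop cycle; the config path, the socket probe, and the rpc_get_methods liveness check are assumptions standing in for the harness helpers:

  #!/usr/bin/env bash
  # Sketch: boot spdk_tgt from a static JSON config and confirm it is reachable.
  ROOTDIR=${ROOTDIR:-.}
  SOCK=/var/tmp/spdk_tgt.sock
  CONFIG=${CONFIG:-test/json_config/extra_key.json}   # assumed relative path

  "$ROOTDIR"/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --json "$CONFIG" &
  tgt_pid=$!
  until [ -S "$SOCK" ]; do sleep 0.2; done

  "$ROOTDIR"/scripts/rpc.py -s "$SOCK" rpc_get_methods > /dev/null && echo Success
  kill -SIGINT "$tgt_pid"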
00:11:01.270 13:39:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1224220 /var/tmp/spdk_tgt.sock 00:11:01.270 13:39:15 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 1224220 ']' 00:11:01.270 13:39:15 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:01.270 13:39:15 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:11:01.270 13:39:15 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:01.270 13:39:15 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:01.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:01.270 13:39:15 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:01.270 13:39:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:01.270 [2024-06-10 13:39:15.459290] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:01.270 [2024-06-10 13:39:15.459356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224220 ] 00:11:01.270 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.530 [2024-06-10 13:39:15.802530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.530 [2024-06-10 13:39:15.881324] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.097 13:39:16 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:02.097 13:39:16 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:11:02.097 13:39:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:02.097 00:11:02.097 13:39:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:11:02.097 INFO: shutting down applications... 
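[annotation] waitforlisten is what prints the "Waiting for process to start up and listen on UNIX domain socket ..." line above: it blocks until the freshly launched target answers on its RPC socket, giving up after max_retries (100 here) or as soon as the process disappears. The loop below is a simplified stand-in for that helper, built only from tools already visible in this log (kill -0 and scripts/rpc.py); the real implementation in common/autotest_common.sh differs in detail.

    # Hypothetical, simplified waitforlisten: poll the RPC socket until the
    # target answers, the process dies, or the retry budget is spent.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1        # target exited early
            if "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" -t 2 \
                   rpc_get_methods >/dev/null 2>&1; then
                return 0                                   # socket is listening
            fi
            sleep 0.5
        done
        return 1                                           # timed out
    }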
00:11:02.097 13:39:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:02.097 13:39:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:02.097 13:39:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:02.097 13:39:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1224220 ]] 00:11:02.097 13:39:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1224220 00:11:02.097 13:39:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:02.097 13:39:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:02.097 13:39:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1224220 00:11:02.097 13:39:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:02.664 13:39:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:02.664 13:39:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:02.664 13:39:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1224220 00:11:02.664 13:39:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:02.664 13:39:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:02.664 13:39:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:02.664 13:39:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:02.664 SPDK target shutdown done 00:11:02.664 13:39:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:02.664 Success 00:11:02.664 00:11:02.664 real 0m1.566s 00:11:02.664 user 0m1.352s 00:11:02.664 sys 0m0.498s 00:11:02.664 13:39:16 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:02.664 13:39:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:02.664 ************************************ 00:11:02.664 END TEST json_config_extra_key 00:11:02.664 ************************************ 00:11:02.664 13:39:16 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:02.664 13:39:16 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:02.664 13:39:16 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:02.664 13:39:16 -- common/autotest_common.sh@10 -- # set +x 00:11:02.664 ************************************ 00:11:02.664 START TEST alias_rpc 00:11:02.664 ************************************ 00:11:02.664 13:39:16 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:02.664 * Looking for test storage... 
00:11:02.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:11:02.664 13:39:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:02.664 13:39:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1224539 00:11:02.665 13:39:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1224539 00:11:02.665 13:39:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:11:02.665 13:39:17 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 1224539 ']' 00:11:02.665 13:39:17 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.665 13:39:17 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:02.665 13:39:17 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.665 13:39:17 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:02.665 13:39:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.665 [2024-06-10 13:39:17.117044] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:02.665 [2024-06-10 13:39:17.117114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224539 ] 00:11:02.923 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.923 [2024-06-10 13:39:17.237114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.923 [2024-06-10 13:39:17.322634] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.858 13:39:18 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:03.858 13:39:18 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:11:03.858 13:39:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:11:03.858 13:39:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1224539 00:11:03.858 13:39:18 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 1224539 ']' 00:11:03.858 13:39:18 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 1224539 00:11:03.858 13:39:18 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:11:03.858 13:39:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:03.859 13:39:18 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1224539 00:11:03.859 13:39:18 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:03.859 13:39:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:03.859 13:39:18 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1224539' 00:11:03.859 killing process with pid 1224539 00:11:03.859 13:39:18 alias_rpc -- common/autotest_common.sh@968 -- # kill 1224539 00:11:03.859 13:39:18 alias_rpc -- common/autotest_common.sh@973 -- # wait 1224539 00:11:04.426 00:11:04.426 real 0m1.652s 00:11:04.426 user 0m1.786s 00:11:04.426 sys 0m0.518s 00:11:04.426 13:39:18 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:04.426 13:39:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.426 
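[annotation] The timing summary above is the tail of the alias_rpc run, which has the same start/exercise/stop shape as the previous test: launch a plain spdk_tgt, wait for its socket, drive it through scripts/rpc.py load_config with the -i flag (judging by the test's name and purpose, this is what pulls the deprecated RPC method aliases into play), then kill and reap the target. A hedged outline, reusing the helper sketched earlier:

    # Outline only; the real script is test/json_config/alias_rpc/alias_rpc.sh.
    "$SPDK_DIR/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    waitforlisten_sketch "$spdk_tgt_pid" /var/tmp/spdk.sock

    # Replay a saved configuration; -i is assumed here to mean "accept the old
    # aliased method names", which is what this test exists to cover.
    "$SPDK_DIR/scripts/rpc.py" load_config -i < config_with_aliases.json
    # (config_with_aliases.json is a placeholder input, not a file from the repo)

    # killprocess equivalent: SIGTERM, then reap.
    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"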
************************************ 00:11:04.426 END TEST alias_rpc 00:11:04.426 ************************************ 00:11:04.426 13:39:18 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:11:04.426 13:39:18 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:11:04.427 13:39:18 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:04.427 13:39:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:04.427 13:39:18 -- common/autotest_common.sh@10 -- # set +x 00:11:04.427 ************************************ 00:11:04.427 START TEST spdkcli_tcp 00:11:04.427 ************************************ 00:11:04.427 13:39:18 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:11:04.427 * Looking for test storage... 00:11:04.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:11:04.427 13:39:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:11:04.427 13:39:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:11:04.427 13:39:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:11:04.427 13:39:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:04.427 13:39:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:04.427 13:39:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:04.427 13:39:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:04.427 13:39:18 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:04.427 13:39:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:04.427 13:39:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1224862 00:11:04.427 13:39:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:04.427 13:39:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1224862 00:11:04.427 13:39:18 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 1224862 ']' 00:11:04.427 13:39:18 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.427 13:39:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:04.427 13:39:18 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.427 13:39:18 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:04.427 13:39:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:04.427 [2024-06-10 13:39:18.845163] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:11:04.427 [2024-06-10 13:39:18.845225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224862 ] 00:11:04.427 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.686 [2024-06-10 13:39:18.965752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:04.686 [2024-06-10 13:39:19.050476] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.686 [2024-06-10 13:39:19.050481] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.623 13:39:19 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:05.623 13:39:19 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:11:05.623 13:39:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1225125 00:11:05.623 13:39:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:05.623 13:39:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:05.623 [ 00:11:05.623 "bdev_malloc_delete", 00:11:05.623 "bdev_malloc_create", 00:11:05.623 "bdev_null_resize", 00:11:05.623 "bdev_null_delete", 00:11:05.623 "bdev_null_create", 00:11:05.623 "bdev_nvme_cuse_unregister", 00:11:05.623 "bdev_nvme_cuse_register", 00:11:05.623 "bdev_opal_new_user", 00:11:05.623 "bdev_opal_set_lock_state", 00:11:05.623 "bdev_opal_delete", 00:11:05.623 "bdev_opal_get_info", 00:11:05.623 "bdev_opal_create", 00:11:05.623 "bdev_nvme_opal_revert", 00:11:05.623 "bdev_nvme_opal_init", 00:11:05.623 "bdev_nvme_send_cmd", 00:11:05.623 "bdev_nvme_get_path_iostat", 00:11:05.623 "bdev_nvme_get_mdns_discovery_info", 00:11:05.623 "bdev_nvme_stop_mdns_discovery", 00:11:05.623 "bdev_nvme_start_mdns_discovery", 00:11:05.623 "bdev_nvme_set_multipath_policy", 00:11:05.623 "bdev_nvme_set_preferred_path", 00:11:05.623 "bdev_nvme_get_io_paths", 00:11:05.623 "bdev_nvme_remove_error_injection", 00:11:05.623 "bdev_nvme_add_error_injection", 00:11:05.623 "bdev_nvme_get_discovery_info", 00:11:05.623 "bdev_nvme_stop_discovery", 00:11:05.623 "bdev_nvme_start_discovery", 00:11:05.623 "bdev_nvme_get_controller_health_info", 00:11:05.623 "bdev_nvme_disable_controller", 00:11:05.623 "bdev_nvme_enable_controller", 00:11:05.623 "bdev_nvme_reset_controller", 00:11:05.623 "bdev_nvme_get_transport_statistics", 00:11:05.623 "bdev_nvme_apply_firmware", 00:11:05.623 "bdev_nvme_detach_controller", 00:11:05.623 "bdev_nvme_get_controllers", 00:11:05.623 "bdev_nvme_attach_controller", 00:11:05.623 "bdev_nvme_set_hotplug", 00:11:05.623 "bdev_nvme_set_options", 00:11:05.623 "bdev_passthru_delete", 00:11:05.623 "bdev_passthru_create", 00:11:05.623 "bdev_lvol_set_parent_bdev", 00:11:05.623 "bdev_lvol_set_parent", 00:11:05.623 "bdev_lvol_check_shallow_copy", 00:11:05.623 "bdev_lvol_start_shallow_copy", 00:11:05.623 "bdev_lvol_grow_lvstore", 00:11:05.623 "bdev_lvol_get_lvols", 00:11:05.623 "bdev_lvol_get_lvstores", 00:11:05.624 "bdev_lvol_delete", 00:11:05.624 "bdev_lvol_set_read_only", 00:11:05.624 "bdev_lvol_resize", 00:11:05.624 "bdev_lvol_decouple_parent", 00:11:05.624 "bdev_lvol_inflate", 00:11:05.624 "bdev_lvol_rename", 00:11:05.624 "bdev_lvol_clone_bdev", 00:11:05.624 "bdev_lvol_clone", 00:11:05.624 "bdev_lvol_snapshot", 00:11:05.624 "bdev_lvol_create", 00:11:05.624 "bdev_lvol_delete_lvstore", 00:11:05.624 "bdev_lvol_rename_lvstore", 
00:11:05.624 "bdev_lvol_create_lvstore", 00:11:05.624 "bdev_raid_set_options", 00:11:05.624 "bdev_raid_remove_base_bdev", 00:11:05.624 "bdev_raid_add_base_bdev", 00:11:05.624 "bdev_raid_delete", 00:11:05.624 "bdev_raid_create", 00:11:05.624 "bdev_raid_get_bdevs", 00:11:05.624 "bdev_error_inject_error", 00:11:05.624 "bdev_error_delete", 00:11:05.624 "bdev_error_create", 00:11:05.624 "bdev_split_delete", 00:11:05.624 "bdev_split_create", 00:11:05.624 "bdev_delay_delete", 00:11:05.624 "bdev_delay_create", 00:11:05.624 "bdev_delay_update_latency", 00:11:05.624 "bdev_zone_block_delete", 00:11:05.624 "bdev_zone_block_create", 00:11:05.624 "blobfs_create", 00:11:05.624 "blobfs_detect", 00:11:05.624 "blobfs_set_cache_size", 00:11:05.624 "bdev_aio_delete", 00:11:05.624 "bdev_aio_rescan", 00:11:05.624 "bdev_aio_create", 00:11:05.624 "bdev_ftl_set_property", 00:11:05.624 "bdev_ftl_get_properties", 00:11:05.624 "bdev_ftl_get_stats", 00:11:05.624 "bdev_ftl_unmap", 00:11:05.624 "bdev_ftl_unload", 00:11:05.624 "bdev_ftl_delete", 00:11:05.624 "bdev_ftl_load", 00:11:05.624 "bdev_ftl_create", 00:11:05.624 "bdev_virtio_attach_controller", 00:11:05.624 "bdev_virtio_scsi_get_devices", 00:11:05.624 "bdev_virtio_detach_controller", 00:11:05.624 "bdev_virtio_blk_set_hotplug", 00:11:05.624 "bdev_iscsi_delete", 00:11:05.624 "bdev_iscsi_create", 00:11:05.624 "bdev_iscsi_set_options", 00:11:05.624 "accel_error_inject_error", 00:11:05.624 "ioat_scan_accel_module", 00:11:05.624 "dsa_scan_accel_module", 00:11:05.624 "iaa_scan_accel_module", 00:11:05.624 "vfu_virtio_create_scsi_endpoint", 00:11:05.624 "vfu_virtio_scsi_remove_target", 00:11:05.624 "vfu_virtio_scsi_add_target", 00:11:05.624 "vfu_virtio_create_blk_endpoint", 00:11:05.624 "vfu_virtio_delete_endpoint", 00:11:05.624 "keyring_file_remove_key", 00:11:05.624 "keyring_file_add_key", 00:11:05.624 "keyring_linux_set_options", 00:11:05.624 "iscsi_get_histogram", 00:11:05.624 "iscsi_enable_histogram", 00:11:05.624 "iscsi_set_options", 00:11:05.624 "iscsi_get_auth_groups", 00:11:05.624 "iscsi_auth_group_remove_secret", 00:11:05.624 "iscsi_auth_group_add_secret", 00:11:05.624 "iscsi_delete_auth_group", 00:11:05.624 "iscsi_create_auth_group", 00:11:05.624 "iscsi_set_discovery_auth", 00:11:05.624 "iscsi_get_options", 00:11:05.624 "iscsi_target_node_request_logout", 00:11:05.624 "iscsi_target_node_set_redirect", 00:11:05.624 "iscsi_target_node_set_auth", 00:11:05.624 "iscsi_target_node_add_lun", 00:11:05.624 "iscsi_get_stats", 00:11:05.624 "iscsi_get_connections", 00:11:05.624 "iscsi_portal_group_set_auth", 00:11:05.624 "iscsi_start_portal_group", 00:11:05.624 "iscsi_delete_portal_group", 00:11:05.624 "iscsi_create_portal_group", 00:11:05.624 "iscsi_get_portal_groups", 00:11:05.624 "iscsi_delete_target_node", 00:11:05.624 "iscsi_target_node_remove_pg_ig_maps", 00:11:05.624 "iscsi_target_node_add_pg_ig_maps", 00:11:05.624 "iscsi_create_target_node", 00:11:05.624 "iscsi_get_target_nodes", 00:11:05.624 "iscsi_delete_initiator_group", 00:11:05.624 "iscsi_initiator_group_remove_initiators", 00:11:05.624 "iscsi_initiator_group_add_initiators", 00:11:05.624 "iscsi_create_initiator_group", 00:11:05.624 "iscsi_get_initiator_groups", 00:11:05.624 "nvmf_set_crdt", 00:11:05.624 "nvmf_set_config", 00:11:05.624 "nvmf_set_max_subsystems", 00:11:05.624 "nvmf_stop_mdns_prr", 00:11:05.624 "nvmf_publish_mdns_prr", 00:11:05.624 "nvmf_subsystem_get_listeners", 00:11:05.624 "nvmf_subsystem_get_qpairs", 00:11:05.624 "nvmf_subsystem_get_controllers", 00:11:05.624 "nvmf_get_stats", 00:11:05.624 
"nvmf_get_transports", 00:11:05.624 "nvmf_create_transport", 00:11:05.624 "nvmf_get_targets", 00:11:05.624 "nvmf_delete_target", 00:11:05.624 "nvmf_create_target", 00:11:05.624 "nvmf_subsystem_allow_any_host", 00:11:05.624 "nvmf_subsystem_remove_host", 00:11:05.624 "nvmf_subsystem_add_host", 00:11:05.624 "nvmf_ns_remove_host", 00:11:05.624 "nvmf_ns_add_host", 00:11:05.624 "nvmf_subsystem_remove_ns", 00:11:05.624 "nvmf_subsystem_add_ns", 00:11:05.624 "nvmf_subsystem_listener_set_ana_state", 00:11:05.624 "nvmf_discovery_get_referrals", 00:11:05.624 "nvmf_discovery_remove_referral", 00:11:05.624 "nvmf_discovery_add_referral", 00:11:05.624 "nvmf_subsystem_remove_listener", 00:11:05.624 "nvmf_subsystem_add_listener", 00:11:05.624 "nvmf_delete_subsystem", 00:11:05.624 "nvmf_create_subsystem", 00:11:05.624 "nvmf_get_subsystems", 00:11:05.624 "env_dpdk_get_mem_stats", 00:11:05.624 "nbd_get_disks", 00:11:05.624 "nbd_stop_disk", 00:11:05.624 "nbd_start_disk", 00:11:05.624 "ublk_recover_disk", 00:11:05.624 "ublk_get_disks", 00:11:05.624 "ublk_stop_disk", 00:11:05.624 "ublk_start_disk", 00:11:05.624 "ublk_destroy_target", 00:11:05.624 "ublk_create_target", 00:11:05.624 "virtio_blk_create_transport", 00:11:05.624 "virtio_blk_get_transports", 00:11:05.624 "vhost_controller_set_coalescing", 00:11:05.624 "vhost_get_controllers", 00:11:05.624 "vhost_delete_controller", 00:11:05.624 "vhost_create_blk_controller", 00:11:05.624 "vhost_scsi_controller_remove_target", 00:11:05.624 "vhost_scsi_controller_add_target", 00:11:05.624 "vhost_start_scsi_controller", 00:11:05.624 "vhost_create_scsi_controller", 00:11:05.624 "thread_set_cpumask", 00:11:05.624 "framework_get_scheduler", 00:11:05.624 "framework_set_scheduler", 00:11:05.624 "framework_get_reactors", 00:11:05.624 "thread_get_io_channels", 00:11:05.624 "thread_get_pollers", 00:11:05.624 "thread_get_stats", 00:11:05.624 "framework_monitor_context_switch", 00:11:05.624 "spdk_kill_instance", 00:11:05.624 "log_enable_timestamps", 00:11:05.624 "log_get_flags", 00:11:05.624 "log_clear_flag", 00:11:05.624 "log_set_flag", 00:11:05.624 "log_get_level", 00:11:05.624 "log_set_level", 00:11:05.624 "log_get_print_level", 00:11:05.624 "log_set_print_level", 00:11:05.624 "framework_enable_cpumask_locks", 00:11:05.624 "framework_disable_cpumask_locks", 00:11:05.624 "framework_wait_init", 00:11:05.624 "framework_start_init", 00:11:05.624 "scsi_get_devices", 00:11:05.624 "bdev_get_histogram", 00:11:05.624 "bdev_enable_histogram", 00:11:05.624 "bdev_set_qos_limit", 00:11:05.624 "bdev_set_qd_sampling_period", 00:11:05.624 "bdev_get_bdevs", 00:11:05.624 "bdev_reset_iostat", 00:11:05.624 "bdev_get_iostat", 00:11:05.624 "bdev_examine", 00:11:05.624 "bdev_wait_for_examine", 00:11:05.624 "bdev_set_options", 00:11:05.624 "notify_get_notifications", 00:11:05.624 "notify_get_types", 00:11:05.624 "accel_get_stats", 00:11:05.624 "accel_set_options", 00:11:05.624 "accel_set_driver", 00:11:05.624 "accel_crypto_key_destroy", 00:11:05.624 "accel_crypto_keys_get", 00:11:05.624 "accel_crypto_key_create", 00:11:05.624 "accel_assign_opc", 00:11:05.624 "accel_get_module_info", 00:11:05.624 "accel_get_opc_assignments", 00:11:05.624 "vmd_rescan", 00:11:05.624 "vmd_remove_device", 00:11:05.624 "vmd_enable", 00:11:05.625 "sock_get_default_impl", 00:11:05.625 "sock_set_default_impl", 00:11:05.625 "sock_impl_set_options", 00:11:05.625 "sock_impl_get_options", 00:11:05.625 "iobuf_get_stats", 00:11:05.625 "iobuf_set_options", 00:11:05.625 "keyring_get_keys", 00:11:05.625 "framework_get_pci_devices", 
00:11:05.625 "framework_get_config", 00:11:05.625 "framework_get_subsystems", 00:11:05.625 "vfu_tgt_set_base_path", 00:11:05.625 "trace_get_info", 00:11:05.625 "trace_get_tpoint_group_mask", 00:11:05.625 "trace_disable_tpoint_group", 00:11:05.625 "trace_enable_tpoint_group", 00:11:05.625 "trace_clear_tpoint_mask", 00:11:05.625 "trace_set_tpoint_mask", 00:11:05.625 "spdk_get_version", 00:11:05.625 "rpc_get_methods" 00:11:05.625 ] 00:11:05.625 13:39:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:05.625 13:39:19 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:05.625 13:39:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:05.625 13:39:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:05.625 13:39:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1224862 00:11:05.625 13:39:20 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 1224862 ']' 00:11:05.625 13:39:20 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 1224862 00:11:05.625 13:39:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:11:05.625 13:39:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:05.625 13:39:20 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1224862 00:11:05.625 13:39:20 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:05.625 13:39:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:05.625 13:39:20 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1224862' 00:11:05.625 killing process with pid 1224862 00:11:05.625 13:39:20 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 1224862 00:11:05.625 13:39:20 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 1224862 00:11:06.193 00:11:06.193 real 0m1.741s 00:11:06.193 user 0m3.194s 00:11:06.193 sys 0m0.580s 00:11:06.193 13:39:20 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:06.193 13:39:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:06.193 ************************************ 00:11:06.193 END TEST spdkcli_tcp 00:11:06.193 ************************************ 00:11:06.193 13:39:20 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:06.193 13:39:20 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:06.193 13:39:20 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:06.194 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:11:06.194 ************************************ 00:11:06.194 START TEST dpdk_mem_utility 00:11:06.194 ************************************ 00:11:06.194 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:06.194 * Looking for test storage... 
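[annotation] The spdkcli_tcp run that ends just above shows the standard trick for reaching an SPDK target over TCP when it only listens on a UNIX-domain socket: socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP side. Both commands below are taken from the trace; only the comments are added.

    # Bridge the target's UNIX-domain RPC socket to a local TCP port.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Query the full RPC method list over TCP:
    #   -r 100  connection retries, -t 2  per-call timeout in seconds,
    #   -s/-p   address and port of the socat listener.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The long bracketed list printed above is simply the target's RPC surface at this commit; rpc_get_methods also doubles as a cheap liveness probe, which is why the waitforlisten sketch earlier leans on it.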
00:11:06.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:11:06.194 13:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:11:06.194 13:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1225249 00:11:06.194 13:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1225249 00:11:06.194 13:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:11:06.194 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 1225249 ']' 00:11:06.194 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.194 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:06.194 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.194 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:06.194 13:39:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:06.453 [2024-06-10 13:39:20.666693] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:06.453 [2024-06-10 13:39:20.666764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225249 ] 00:11:06.453 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.453 [2024-06-10 13:39:20.789403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.453 [2024-06-10 13:39:20.875267] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.390 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:07.390 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:11:07.390 13:39:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:07.390 13:39:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:07.390 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:07.390 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:07.390 { 00:11:07.390 "filename": "/tmp/spdk_mem_dump.txt" 00:11:07.390 } 00:11:07.390 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:07.390 13:39:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:11:07.390 DPDK memory size 814.000000 MiB in 1 heap(s) 00:11:07.390 1 heaps totaling size 814.000000 MiB 00:11:07.390 size: 814.000000 MiB heap id: 0 00:11:07.390 end heaps---------- 00:11:07.390 8 mempools totaling size 598.116089 MiB 00:11:07.390 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:07.390 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:07.390 size: 84.521057 MiB name: bdev_io_1225249 00:11:07.390 size: 51.011292 MiB name: evtpool_1225249 00:11:07.390 size: 50.003479 MiB name: 
msgpool_1225249 00:11:07.390 size: 21.763794 MiB name: PDU_Pool 00:11:07.390 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:07.390 size: 0.026123 MiB name: Session_Pool 00:11:07.390 end mempools------- 00:11:07.390 6 memzones totaling size 4.142822 MiB 00:11:07.390 size: 1.000366 MiB name: RG_ring_0_1225249 00:11:07.390 size: 1.000366 MiB name: RG_ring_1_1225249 00:11:07.390 size: 1.000366 MiB name: RG_ring_4_1225249 00:11:07.390 size: 1.000366 MiB name: RG_ring_5_1225249 00:11:07.390 size: 0.125366 MiB name: RG_ring_2_1225249 00:11:07.390 size: 0.015991 MiB name: RG_ring_3_1225249 00:11:07.390 end memzones------- 00:11:07.390 13:39:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:11:07.390 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:11:07.390 list of free elements. size: 12.519348 MiB 00:11:07.390 element at address: 0x200000400000 with size: 1.999512 MiB 00:11:07.390 element at address: 0x200018e00000 with size: 0.999878 MiB 00:11:07.390 element at address: 0x200019000000 with size: 0.999878 MiB 00:11:07.390 element at address: 0x200003e00000 with size: 0.996277 MiB 00:11:07.390 element at address: 0x200031c00000 with size: 0.994446 MiB 00:11:07.390 element at address: 0x200013800000 with size: 0.978699 MiB 00:11:07.390 element at address: 0x200007000000 with size: 0.959839 MiB 00:11:07.390 element at address: 0x200019200000 with size: 0.936584 MiB 00:11:07.390 element at address: 0x200000200000 with size: 0.841614 MiB 00:11:07.390 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:11:07.390 element at address: 0x20000b200000 with size: 0.490723 MiB 00:11:07.390 element at address: 0x200000800000 with size: 0.487793 MiB 00:11:07.390 element at address: 0x200019400000 with size: 0.485657 MiB 00:11:07.390 element at address: 0x200027e00000 with size: 0.410034 MiB 00:11:07.390 element at address: 0x200003a00000 with size: 0.355530 MiB 00:11:07.390 list of standard malloc elements. 
size: 199.218079 MiB 00:11:07.390 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:11:07.390 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:11:07.390 element at address: 0x200018efff80 with size: 1.000122 MiB 00:11:07.390 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:11:07.390 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:11:07.390 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:11:07.390 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:11:07.390 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:11:07.390 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:11:07.390 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:11:07.390 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:11:07.390 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:11:07.390 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:11:07.390 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:11:07.390 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:11:07.390 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:11:07.390 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:11:07.390 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:11:07.390 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:11:07.390 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:11:07.390 element at address: 0x200003adb300 with size: 0.000183 MiB 00:11:07.390 element at address: 0x200003adb500 with size: 0.000183 MiB 00:11:07.390 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:11:07.390 element at address: 0x200003affa80 with size: 0.000183 MiB 00:11:07.390 element at address: 0x200003affb40 with size: 0.000183 MiB 00:11:07.390 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:11:07.391 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:11:07.391 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:11:07.391 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:11:07.391 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:11:07.391 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:11:07.391 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:11:07.391 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:11:07.391 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:11:07.391 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:11:07.391 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:11:07.391 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:11:07.391 element at address: 0x200027e69040 with size: 0.000183 MiB 00:11:07.391 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:11:07.391 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:11:07.391 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:11:07.391 list of memzone associated elements. 
size: 602.262573 MiB 00:11:07.391 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:11:07.391 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:07.391 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:11:07.391 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:07.391 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:11:07.391 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1225249_0 00:11:07.391 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:11:07.391 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1225249_0 00:11:07.391 element at address: 0x200003fff380 with size: 48.003052 MiB 00:11:07.391 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1225249_0 00:11:07.391 element at address: 0x2000195be940 with size: 20.255554 MiB 00:11:07.391 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:07.391 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:11:07.391 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:07.391 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:11:07.391 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1225249 00:11:07.391 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:11:07.391 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1225249 00:11:07.391 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:11:07.391 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1225249 00:11:07.391 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:11:07.391 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:07.391 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:11:07.391 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:07.391 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:11:07.391 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:07.391 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:11:07.391 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:07.391 element at address: 0x200003eff180 with size: 1.000488 MiB 00:11:07.391 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1225249 00:11:07.391 element at address: 0x200003affc00 with size: 1.000488 MiB 00:11:07.391 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1225249 00:11:07.391 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:11:07.391 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1225249 00:11:07.391 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:11:07.391 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1225249 00:11:07.391 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:11:07.391 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1225249 00:11:07.391 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:11:07.391 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:07.391 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:11:07.391 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:07.391 element at address: 0x20001947c540 with size: 0.250488 MiB 00:11:07.391 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:11:07.391 element at address: 0x200003adf880 with size: 0.125488 MiB 00:11:07.391 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1225249 00:11:07.391 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:11:07.391 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:07.391 element at address: 0x200027e69100 with size: 0.023743 MiB 00:11:07.391 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:07.391 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:11:07.391 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1225249 00:11:07.391 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:11:07.391 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:07.391 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:11:07.391 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1225249 00:11:07.391 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:11:07.391 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1225249 00:11:07.391 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:11:07.391 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:07.391 13:39:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:07.391 13:39:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1225249 00:11:07.391 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 1225249 ']' 00:11:07.391 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 1225249 00:11:07.391 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:11:07.391 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:07.391 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1225249 00:11:07.391 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:07.391 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:07.391 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1225249' 00:11:07.391 killing process with pid 1225249 00:11:07.391 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 1225249 00:11:07.391 13:39:21 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 1225249 00:11:07.651 00:11:07.651 real 0m1.600s 00:11:07.651 user 0m1.684s 00:11:07.651 sys 0m0.546s 00:11:07.651 13:39:22 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:07.651 13:39:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:07.651 ************************************ 00:11:07.651 END TEST dpdk_mem_utility 00:11:07.651 ************************************ 00:11:07.909 13:39:22 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:11:07.909 13:39:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:07.909 13:39:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:07.909 13:39:22 -- common/autotest_common.sh@10 -- # set +x 00:11:07.909 ************************************ 00:11:07.909 START TEST event 00:11:07.909 ************************************ 00:11:07.909 13:39:22 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:11:07.909 * Looking for test storage... 
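[annotation] The dpdk_mem_utility test finishing above is driven by two commands that both appear in the trace: the env_dpdk_get_mem_stats RPC, which makes the target write a DPDK memory dump (its reply shows the file landing in /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py, which turns that dump into the heap/mempool/memzone summaries and, with -m 0, the per-element listing for heap 0. A minimal replay against an already-running target:

    # Ask the running target to dump its DPDK memory statistics.
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    # Summarize heaps, mempools and memzones (the tables shown in the trace);
    # the trace suggests the script picks up /tmp/spdk_mem_dump.txt by default.
    "$SPDK_DIR/scripts/dpdk_mem_info.py"

    # Per-element detail for heap 0: free list, malloc elements, memzones.
    "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0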
00:11:07.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:11:07.909 13:39:22 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:11:07.909 13:39:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:07.909 13:39:22 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:07.909 13:39:22 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:11:07.909 13:39:22 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:07.909 13:39:22 event -- common/autotest_common.sh@10 -- # set +x 00:11:07.909 ************************************ 00:11:07.909 START TEST event_perf 00:11:07.909 ************************************ 00:11:07.909 13:39:22 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:07.909 Running I/O for 1 seconds...[2024-06-10 13:39:22.354739] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:07.909 [2024-06-10 13:39:22.354821] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225676 ] 00:11:08.168 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.168 [2024-06-10 13:39:22.463022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.168 [2024-06-10 13:39:22.550212] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.168 [2024-06-10 13:39:22.550306] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.168 [2024-06-10 13:39:22.550418] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.168 [2024-06-10 13:39:22.550419] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.544 Running I/O for 1 seconds... 00:11:09.544 lcore 0: 175973 00:11:09.544 lcore 1: 175972 00:11:09.544 lcore 2: 175972 00:11:09.544 lcore 3: 175974 00:11:09.544 done. 00:11:09.544 00:11:09.544 real 0m1.298s 00:11:09.544 user 0m4.180s 00:11:09.544 sys 0m0.113s 00:11:09.544 13:39:23 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:09.544 13:39:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:09.544 ************************************ 00:11:09.544 END TEST event_perf 00:11:09.544 ************************************ 00:11:09.544 13:39:23 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:11:09.544 13:39:23 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:11:09.544 13:39:23 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:09.544 13:39:23 event -- common/autotest_common.sh@10 -- # set +x 00:11:09.544 ************************************ 00:11:09.544 START TEST event_reactor 00:11:09.544 ************************************ 00:11:09.544 13:39:23 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:11:09.544 [2024-06-10 13:39:23.731973] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:11:09.544 [2024-06-10 13:39:23.732056] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225891 ] 00:11:09.544 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.544 [2024-06-10 13:39:23.855656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.544 [2024-06-10 13:39:23.938091] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.922 test_start 00:11:10.922 oneshot 00:11:10.922 tick 100 00:11:10.922 tick 100 00:11:10.922 tick 250 00:11:10.922 tick 100 00:11:10.922 tick 100 00:11:10.922 tick 250 00:11:10.922 tick 100 00:11:10.922 tick 500 00:11:10.922 tick 100 00:11:10.922 tick 100 00:11:10.922 tick 250 00:11:10.922 tick 100 00:11:10.922 tick 100 00:11:10.922 test_end 00:11:10.922 00:11:10.922 real 0m1.304s 00:11:10.922 user 0m1.164s 00:11:10.922 sys 0m0.134s 00:11:10.922 13:39:25 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:10.922 13:39:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:10.922 ************************************ 00:11:10.922 END TEST event_reactor 00:11:10.922 ************************************ 00:11:10.922 13:39:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:10.922 13:39:25 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:11:10.922 13:39:25 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:10.922 13:39:25 event -- common/autotest_common.sh@10 -- # set +x 00:11:10.922 ************************************ 00:11:10.922 START TEST event_reactor_perf 00:11:10.922 ************************************ 00:11:10.922 13:39:25 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:10.922 [2024-06-10 13:39:25.117142] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:11:10.922 [2024-06-10 13:39:25.117224] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226101 ] 00:11:10.922 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.922 [2024-06-10 13:39:25.240434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.922 [2024-06-10 13:39:25.326220] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.300 test_start 00:11:12.300 test_end 00:11:12.300 Performance: 351927 events per second 00:11:12.300 00:11:12.300 real 0m1.309s 00:11:12.300 user 0m1.171s 00:11:12.300 sys 0m0.133s 00:11:12.300 13:39:26 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:12.300 13:39:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 ************************************ 00:11:12.300 END TEST event_reactor_perf 00:11:12.300 ************************************ 00:11:12.300 13:39:26 event -- event/event.sh@49 -- # uname -s 00:11:12.300 13:39:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:12.300 13:39:26 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:11:12.300 13:39:26 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:12.300 13:39:26 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:12.300 13:39:26 event -- common/autotest_common.sh@10 -- # set +x 00:11:12.300 ************************************ 00:11:12.300 START TEST event_scheduler 00:11:12.300 ************************************ 00:11:12.301 13:39:26 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:11:12.301 * Looking for test storage... 00:11:12.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:11:12.301 13:39:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:12.301 13:39:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1226421 00:11:12.301 13:39:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:12.301 13:39:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:12.301 13:39:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1226421 00:11:12.301 13:39:26 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 1226421 ']' 00:11:12.301 13:39:26 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.301 13:39:26 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:12.301 13:39:26 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:12.301 13:39:26 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:12.301 13:39:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:12.301 [2024-06-10 13:39:26.647239] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:12.301 [2024-06-10 13:39:26.647310] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226421 ] 00:11:12.301 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.301 [2024-06-10 13:39:26.742732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.560 [2024-06-10 13:39:26.820309] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.560 [2024-06-10 13:39:26.820395] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.560 [2024-06-10 13:39:26.820504] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.560 [2024-06-10 13:39:26.820505] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.128 13:39:27 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:13.129 13:39:27 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:11:13.129 13:39:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:13.129 13:39:27 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.129 13:39:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:13.129 POWER: Env isn't set yet! 00:11:13.129 POWER: Attempting to initialise ACPI cpufreq power management... 00:11:13.129 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:13.129 POWER: Cannot set governor of lcore 0 to userspace 00:11:13.129 POWER: Attempting to initialise PSTAT power management... 
00:11:13.129 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:11:13.129 POWER: Initialized successfully for lcore 0 power management 00:11:13.129 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:11:13.129 POWER: Initialized successfully for lcore 1 power management 00:11:13.129 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:11:13.129 POWER: Initialized successfully for lcore 2 power management 00:11:13.129 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:11:13.129 POWER: Initialized successfully for lcore 3 power management 00:11:13.129 [2024-06-10 13:39:27.594740] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:13.129 [2024-06-10 13:39:27.594757] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:13.129 [2024-06-10 13:39:27.594767] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:13.129 13:39:27 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.129 13:39:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:13.129 13:39:27 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.129 13:39:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:13.388 [2024-06-10 13:39:27.668562] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:11:13.388 13:39:27 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.388 13:39:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:13.388 13:39:27 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:13.388 13:39:27 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:13.388 13:39:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:13.388 ************************************ 00:11:13.388 START TEST scheduler_create_thread 00:11:13.388 ************************************ 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.388 2 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.388 3 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.388 4 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.388 5 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.388 6 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.388 7 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.388 8 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.388 9 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.388 10 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.388 13:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:14.324 13:39:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:14.324 13:39:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:14.324 13:39:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.324 13:39:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.701 13:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:15.701 13:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:15.701 13:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:15.701 13:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:15.701 13:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:16.637 13:39:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:16.638 00:11:16.638 real 0m3.381s 00:11:16.638 user 0m0.023s 00:11:16.638 sys 0m0.009s 00:11:16.638 13:39:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:16.638 13:39:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:16.638 ************************************ 00:11:16.638 END TEST scheduler_create_thread 00:11:16.638 ************************************ 00:11:16.896 13:39:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:16.896 13:39:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1226421 00:11:16.896 13:39:31 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 1226421 ']' 00:11:16.897 13:39:31 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 1226421 00:11:16.897 13:39:31 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
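For reference, the thread churn traced in scheduler_create_thread above reduces to a handful of plugin RPCs. A minimal sketch of the same sequence, assuming a running scheduler test app on the default RPC socket, $rootdir pointing at the SPDK checkout, and rpc.py able to import scheduler_plugin (PYTHONPATH assumption, not shown in the log):

  # Sketch only: replays the RPC sequence logged above against the scheduler test app.
  rpc="$rootdir/scripts/rpc.py --plugin scheduler_plugin"

  # Pinned busy threads (100% activity), one per core in the 0xF mask.
  for mask in 0x1 0x2 0x4 0x8; do
      $rpc scheduler_thread_create -n active_pinned -m $mask -a 100
  done

  # Matching pinned idle threads (0% activity) on the same cores.
  for mask in 0x1 0x2 0x4 0x8; do
      $rpc scheduler_thread_create -n idle_pinned -m $mask -a 0
  done

  # Unpinned threads; the RPC prints the new thread id, which the test captures.
  $rpc scheduler_thread_create -n one_third_active -a 30
  thread_id=$($rpc scheduler_thread_create -n half_active -a 0)
  $rpc scheduler_thread_set_active "$thread_id" 50

  # A short-lived thread, created and then deleted to exercise removal.
  deleted_id=$($rpc scheduler_thread_create -n deleted -a 100)
  $rpc scheduler_thread_delete "$deleted_id"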
00:11:16.897 13:39:31 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:16.897 13:39:31 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1226421 00:11:16.897 13:39:31 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:11:16.897 13:39:31 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:11:16.897 13:39:31 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1226421' 00:11:16.897 killing process with pid 1226421 00:11:16.897 13:39:31 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 1226421 00:11:16.897 13:39:31 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 1226421 00:11:17.190 [2024-06-10 13:39:31.472947] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:11:17.190 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:11:17.190 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:11:17.190 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:11:17.190 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:11:17.190 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:11:17.190 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:11:17.190 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:11:17.190 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:11:17.475 00:11:17.475 real 0m5.210s 00:11:17.475 user 0m10.855s 00:11:17.475 sys 0m0.471s 00:11:17.475 13:39:31 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:17.475 13:39:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:17.475 ************************************ 00:11:17.475 END TEST event_scheduler 00:11:17.475 ************************************ 00:11:17.475 13:39:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:17.475 13:39:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:17.475 13:39:31 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:17.475 13:39:31 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:17.475 13:39:31 event -- common/autotest_common.sh@10 -- # set +x 00:11:17.475 ************************************ 00:11:17.475 START TEST app_repeat 00:11:17.475 ************************************ 00:11:17.475 13:39:31 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1227514 00:11:17.475 13:39:31 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1227514' 00:11:17.475 Process app_repeat pid: 1227514 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:17.475 spdk_app_start Round 0 00:11:17.475 13:39:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1227514 /var/tmp/spdk-nbd.sock 00:11:17.475 13:39:31 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1227514 ']' 00:11:17.475 13:39:31 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:17.475 13:39:31 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:17.475 13:39:31 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:17.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:17.475 13:39:31 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:17.475 13:39:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:17.475 [2024-06-10 13:39:31.832054] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:17.475 [2024-06-10 13:39:31.832121] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227514 ] 00:11:17.475 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.734 [2024-06-10 13:39:31.956013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:17.734 [2024-06-10 13:39:32.044487] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.734 [2024-06-10 13:39:32.044492] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.302 13:39:32 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:18.302 13:39:32 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:11:18.302 13:39:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:18.561 Malloc0 00:11:18.561 13:39:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:18.820 Malloc1 00:11:18.820 13:39:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:18.820 13:39:33 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:18.820 13:39:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:19.080 /dev/nbd0 00:11:19.080 13:39:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:19.080 13:39:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:19.080 1+0 records in 00:11:19.080 1+0 records out 00:11:19.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229467 s, 17.9 MB/s 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:11:19.080 13:39:33 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:11:19.080 13:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:19.080 13:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:19.080 13:39:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:19.339 /dev/nbd1 00:11:19.339 13:39:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:19.339 13:39:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:19.339 13:39:33 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:11:19.339 13:39:33 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:11:19.339 13:39:33 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:19.339 13:39:33 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:19.339 13:39:33 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:11:19.339 13:39:33 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:11:19.339 13:39:33 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:11:19.339 13:39:33 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:11:19.339 13:39:33 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:19.339 1+0 records in 00:11:19.339 1+0 records out 00:11:19.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237297 s, 17.3 MB/s 00:11:19.339 13:39:33 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:19.599 13:39:33 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:11:19.599 13:39:33 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:19.599 13:39:33 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:11:19.599 13:39:33 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:11:19.599 13:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:19.599 13:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:19.599 13:39:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:19.599 13:39:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.599 13:39:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:19.599 13:39:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:19.599 { 00:11:19.599 "nbd_device": "/dev/nbd0", 00:11:19.599 "bdev_name": "Malloc0" 00:11:19.599 }, 00:11:19.599 { 00:11:19.599 "nbd_device": "/dev/nbd1", 00:11:19.599 "bdev_name": "Malloc1" 00:11:19.599 } 00:11:19.599 ]' 00:11:19.599 13:39:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:19.599 { 00:11:19.599 "nbd_device": "/dev/nbd0", 00:11:19.599 "bdev_name": "Malloc0" 00:11:19.599 }, 00:11:19.599 { 00:11:19.599 "nbd_device": "/dev/nbd1", 00:11:19.599 "bdev_name": "Malloc1" 00:11:19.599 } 00:11:19.599 ]' 00:11:19.599 13:39:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:19.859 /dev/nbd1' 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:19.859 /dev/nbd1' 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:19.859 13:39:34 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:19.859 256+0 records in 00:11:19.859 256+0 records out 00:11:19.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102492 s, 102 MB/s 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:19.859 256+0 records in 00:11:19.859 256+0 records out 00:11:19.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273999 s, 38.3 MB/s 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:19.859 256+0 records in 00:11:19.859 256+0 records out 00:11:19.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179356 s, 58.5 MB/s 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.859 13:39:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:20.118 13:39:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:20.118 13:39:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:20.118 13:39:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:20.118 13:39:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.118 13:39:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.118 13:39:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:20.118 13:39:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:20.118 13:39:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.118 13:39:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.118 13:39:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:20.378 13:39:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:20.378 13:39:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:20.378 13:39:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:20.378 13:39:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.378 13:39:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.378 13:39:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:20.378 13:39:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:20.378 13:39:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.378 13:39:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:20.378 13:39:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:20.378 13:39:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:20.637 13:39:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:20.637 13:39:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:20.637 13:39:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:20.637 13:39:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:20.637 13:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:20.637 13:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:20.637 13:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:20.637 13:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:20.637 13:39:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:20.637 13:39:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:20.637 13:39:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:20.637 13:39:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:20.637 13:39:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:20.897 13:39:35 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:11:21.156 [2024-06-10 13:39:35.444644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:21.156 [2024-06-10 13:39:35.522015] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.156 [2024-06-10 13:39:35.522020] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.156 [2024-06-10 13:39:35.566864] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:21.156 [2024-06-10 13:39:35.566911] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:24.446 13:39:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:24.446 13:39:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:24.446 spdk_app_start Round 1 00:11:24.446 13:39:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1227514 /var/tmp/spdk-nbd.sock 00:11:24.446 13:39:38 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1227514 ']' 00:11:24.446 13:39:38 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:24.446 13:39:38 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:24.446 13:39:38 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:24.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:24.446 13:39:38 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:24.446 13:39:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:24.446 13:39:38 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:24.446 13:39:38 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:11:24.446 13:39:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:24.446 Malloc0 00:11:24.446 13:39:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:24.446 Malloc1 00:11:24.706 13:39:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
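Each "spdk_app_start Round N" block being traced here follows the same pattern from event.sh: create two malloc bdevs, export them over NBD, run the data-verify pass, then stop the app and pause before the next round. A sketch of one round, using the socket and RPC calls exactly as they appear in the log (the 64/4096 arguments are the malloc size in MB and the block size in bytes):

  # Sketch of one app_repeat round against the nbd RPC socket.
  rpc_nbd="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  # Two 64 MB malloc bdevs with a 4 KiB block size; the RPC prints the bdev name.
  malloc0=$($rpc_nbd bdev_malloc_create 64 4096)   # -> Malloc0
  malloc1=$($rpc_nbd bdev_malloc_create 64 4096)   # -> Malloc1

  # Expose them through the kernel NBD driver, then run the write/verify pass
  # (see the dd/cmp sketch further below).
  $rpc_nbd nbd_start_disk "$malloc0" /dev/nbd0
  $rpc_nbd nbd_start_disk "$malloc1" /dev/nbd1

  # End of round: ask the app to exit and give it a moment before the next iteration.
  $rpc_nbd spdk_kill_instance SIGTERM
  sleep 3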
00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:24.706 13:39:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:24.706 /dev/nbd0 00:11:24.706 13:39:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:24.706 13:39:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:24.706 13:39:39 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:11:24.706 13:39:39 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:11:24.706 13:39:39 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:24.706 13:39:39 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:24.706 13:39:39 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:11:24.706 13:39:39 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:11:24.706 13:39:39 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:11:24.706 13:39:39 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:11:24.706 13:39:39 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:24.965 1+0 records in 00:11:24.965 1+0 records out 00:11:24.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229395 s, 17.9 MB/s 00:11:24.965 13:39:39 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:24.965 13:39:39 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:11:24.965 13:39:39 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:24.965 13:39:39 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:11:24.965 13:39:39 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:11:24.965 13:39:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:24.965 13:39:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:24.965 13:39:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:24.965 /dev/nbd1 00:11:24.965 13:39:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:24.965 13:39:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:24.965 13:39:39 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:11:24.965 13:39:39 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:11:24.965 13:39:39 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:24.965 13:39:39 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:24.965 13:39:39 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:11:25.224 13:39:39 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:11:25.224 13:39:39 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:11:25.224 13:39:39 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
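The waitfornbd checks being traced here poll /proc/partitions for the device name and then read a single block back through the block layer before declaring the export usable. A condensed sketch of that helper, reconstructed from the trace (the inter-retry delay is an assumption; the log only shows the checks themselves):

  # Condensed sketch of the waitfornbd helper seen in the trace above.
  waitfornbd() {
      local nbd_name=$1 i size
      local testfile=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest

      # Wait (up to 20 tries) for the device to appear in /proc/partitions.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumption: delay between retries is not visible in the log
      done

      # Read one 4 KiB block with O_DIRECT to prove the device is readable.
      for ((i = 1; i <= 20; i++)); do
          dd if=/dev/$nbd_name of=$testfile bs=4096 count=1 iflag=direct
          size=$(stat -c %s $testfile)
          rm -f $testfile
          [ "$size" != 0 ] && return 0
      done
      return 1
  }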
00:11:25.225 13:39:39 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:25.225 1+0 records in 00:11:25.225 1+0 records out 00:11:25.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023839 s, 17.2 MB/s 00:11:25.225 13:39:39 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:25.225 13:39:39 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:11:25.225 13:39:39 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:25.225 13:39:39 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:11:25.225 13:39:39 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:11:25.225 13:39:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.225 13:39:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:25.225 13:39:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:25.225 13:39:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.225 13:39:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:25.225 13:39:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:25.225 { 00:11:25.225 "nbd_device": "/dev/nbd0", 00:11:25.225 "bdev_name": "Malloc0" 00:11:25.225 }, 00:11:25.225 { 00:11:25.225 "nbd_device": "/dev/nbd1", 00:11:25.225 "bdev_name": "Malloc1" 00:11:25.225 } 00:11:25.225 ]' 00:11:25.225 13:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:25.225 { 00:11:25.225 "nbd_device": "/dev/nbd0", 00:11:25.225 "bdev_name": "Malloc0" 00:11:25.225 }, 00:11:25.225 { 00:11:25.225 "nbd_device": "/dev/nbd1", 00:11:25.225 "bdev_name": "Malloc1" 00:11:25.225 } 00:11:25.225 ]' 00:11:25.225 13:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:25.484 /dev/nbd1' 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:25.484 /dev/nbd1' 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:25.484 256+0 records in 00:11:25.484 256+0 records out 00:11:25.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00711519 s, 147 MB/s 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:25.484 256+0 records in 00:11:25.484 256+0 records out 00:11:25.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167231 s, 62.7 MB/s 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:25.484 256+0 records in 00:11:25.484 256+0 records out 00:11:25.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208891 s, 50.2 MB/s 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:11:25.484 13:39:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.485 13:39:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:25.744 13:39:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:25.744 13:39:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:25.744 13:39:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:25.744 
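The write/verify pass just logged (and repeated once per round) is plain dd plus cmp. A sketch with the same sizes and flags as the trace: 256 blocks of 4 KiB random data are written to each NBD device with O_DIRECT, then compared byte-for-byte against the source file:

  # Sketch of the nbd data-verify pass from nbd_common.sh.
  tmpfile=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest

  dd if=/dev/urandom of=$tmpfile bs=4096 count=256           # 1 MiB of random data

  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$tmpfile of=$nbd bs=4096 count=256 oflag=direct   # write pass
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M $tmpfile $nbd                              # verify pass
  done

  rm $tmpfile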
13:39:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.744 13:39:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.744 13:39:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:25.744 13:39:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:25.744 13:39:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.744 13:39:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.744 13:39:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:26.005 13:39:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:26.005 13:39:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:26.005 13:39:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:26.005 13:39:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.005 13:39:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.005 13:39:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:26.005 13:39:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:26.005 13:39:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.005 13:39:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:26.005 13:39:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.005 13:39:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:26.264 13:39:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:26.264 13:39:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:26.264 13:39:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:26.264 13:39:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:26.264 13:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:26.264 13:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:26.264 13:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:26.264 13:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:26.264 13:39:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:26.264 13:39:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:26.264 13:39:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:26.264 13:39:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:26.264 13:39:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:26.524 13:39:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:26.783 [2024-06-10 13:39:41.077840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:26.783 [2024-06-10 13:39:41.154492] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.783 [2024-06-10 13:39:41.154496] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.783 [2024-06-10 13:39:41.200441] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
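The teardown check at the end of each round counts the remaining NBD exports by parsing nbd_get_disks. A sketch of that count using the same jq/grep pipeline as the trace (after both nbd_stop_disk calls the RPC returns '[]', so the count must be 0):

  # Sketch: count NBD devices still exported on the nbd RPC socket.
  rpc_nbd="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  disks=$($rpc_nbd nbd_get_disks | jq -r '.[] | .nbd_device')
  # grep -c prints 0 but exits non-zero when there are no matches, hence the || true.
  count=$(echo "$disks" | grep -c /dev/nbd || true)

  [ "$count" -eq 0 ] || echo "unexpected NBD devices still present: $disks"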
00:11:26.783 [2024-06-10 13:39:41.200489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:30.074 13:39:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:30.074 13:39:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:30.074 spdk_app_start Round 2 00:11:30.074 13:39:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1227514 /var/tmp/spdk-nbd.sock 00:11:30.074 13:39:43 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1227514 ']' 00:11:30.074 13:39:43 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:30.074 13:39:43 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:30.074 13:39:43 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:30.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:30.074 13:39:43 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:30.074 13:39:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:30.074 13:39:44 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:30.074 13:39:44 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:11:30.074 13:39:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:30.074 Malloc0 00:11:30.074 13:39:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:30.333 Malloc1 00:11:30.333 13:39:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:30.333 /dev/nbd0 00:11:30.333 
13:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:30.333 13:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:30.333 13:39:44 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:11:30.333 13:39:44 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:11:30.333 13:39:44 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:30.333 13:39:44 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:30.333 13:39:44 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:11:30.333 13:39:44 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:11:30.333 13:39:44 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:11:30.333 13:39:44 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:11:30.333 13:39:44 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:30.593 1+0 records in 00:11:30.593 1+0 records out 00:11:30.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246311 s, 16.6 MB/s 00:11:30.593 13:39:44 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:30.593 13:39:44 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:11:30.593 13:39:44 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:30.593 13:39:44 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:11:30.593 13:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:11:30.593 13:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.593 13:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:30.593 13:39:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:30.593 /dev/nbd1 00:11:30.593 13:39:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:30.852 13:39:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:30.852 13:39:45 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:11:30.852 13:39:45 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:11:30.852 13:39:45 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:30.852 13:39:45 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:30.852 13:39:45 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:11:30.852 13:39:45 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:11:30.852 13:39:45 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:11:30.852 13:39:45 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:11:30.852 13:39:45 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:30.852 1+0 records in 00:11:30.852 1+0 records out 00:11:30.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240958 s, 17.0 MB/s 00:11:30.852 13:39:45 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:30.852 13:39:45 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:11:30.853 13:39:45 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:30.853 13:39:45 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:11:30.853 13:39:45 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:11:30.853 13:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.853 13:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:30.853 13:39:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:30.853 13:39:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.853 13:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:30.853 13:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:30.853 { 00:11:30.853 "nbd_device": "/dev/nbd0", 00:11:30.853 "bdev_name": "Malloc0" 00:11:30.853 }, 00:11:30.853 { 00:11:30.853 "nbd_device": "/dev/nbd1", 00:11:30.853 "bdev_name": "Malloc1" 00:11:30.853 } 00:11:30.853 ]' 00:11:30.853 13:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:30.853 { 00:11:30.853 "nbd_device": "/dev/nbd0", 00:11:30.853 "bdev_name": "Malloc0" 00:11:30.853 }, 00:11:30.853 { 00:11:30.853 "nbd_device": "/dev/nbd1", 00:11:30.853 "bdev_name": "Malloc1" 00:11:30.853 } 00:11:30.853 ]' 00:11:30.853 13:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:31.112 /dev/nbd1' 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:31.112 /dev/nbd1' 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:31.112 256+0 records in 00:11:31.112 256+0 records out 00:11:31.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114425 s, 91.6 MB/s 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:31.112 256+0 records in 00:11:31.112 256+0 records out 00:11:31.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168612 s, 62.2 MB/s 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:31.112 256+0 records in 00:11:31.112 256+0 records out 00:11:31.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257154 s, 40.8 MB/s 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.112 13:39:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:31.372 13:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:31.372 13:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:31.372 13:39:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:31.372 13:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.372 13:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.372 13:39:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:31.372 13:39:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:31.372 13:39:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
00:11:31.372 13:39:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.372 13:39:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:31.631 13:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:31.631 13:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:31.631 13:39:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:31.631 13:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.631 13:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.631 13:39:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:31.631 13:39:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:31.631 13:39:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.631 13:39:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:31.631 13:39:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.631 13:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:31.890 13:39:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:31.890 13:39:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:31.890 13:39:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:31.890 13:39:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:31.890 13:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:31.890 13:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:31.890 13:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:31.890 13:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:31.890 13:39:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:31.890 13:39:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:31.890 13:39:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:31.890 13:39:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:31.890 13:39:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:32.150 13:39:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:32.410 [2024-06-10 13:39:46.715271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:32.410 [2024-06-10 13:39:46.792452] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.410 [2024-06-10 13:39:46.792456] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.410 [2024-06-10 13:39:46.837356] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:32.410 [2024-06-10 13:39:46.837404] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:11:35.701 13:39:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1227514 /var/tmp/spdk-nbd.sock 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1227514 ']' 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:35.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:11:35.701 13:39:49 event.app_repeat -- event/event.sh@39 -- # killprocess 1227514 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 1227514 ']' 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 1227514 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1227514 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1227514' 00:11:35.701 killing process with pid 1227514 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@968 -- # kill 1227514 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@973 -- # wait 1227514 00:11:35.701 spdk_app_start is called in Round 0. 00:11:35.701 Shutdown signal received, stop current app iteration 00:11:35.701 Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 reinitialization... 00:11:35.701 spdk_app_start is called in Round 1. 00:11:35.701 Shutdown signal received, stop current app iteration 00:11:35.701 Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 reinitialization... 00:11:35.701 spdk_app_start is called in Round 2. 00:11:35.701 Shutdown signal received, stop current app iteration 00:11:35.701 Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 reinitialization... 00:11:35.701 spdk_app_start is called in Round 3. 
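The killprocess step above takes a few precautions before signalling the stored pid: it checks that the pid variable is non-empty, that the process still answers kill -0, and on Linux that its comm name is not sudo, so a recycled pid cannot take down an unrelated privileged process; only then does it kill and wait to reap the exit status. A rough sketch of that shape (an illustrative function, not the suite's own helper):

    kill_recorded_pid() {                              # illustrative name
        local pid=$1 name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1         # still running?
        if [ "$(uname)" = Linux ]; then
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" != sudo ] || return 1            # never signal a sudo wrapper by mistake
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap; works because the target is our child
    }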
00:11:35.701 Shutdown signal received, stop current app iteration 00:11:35.701 13:39:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:35.701 13:39:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:35.701 00:11:35.701 real 0m18.157s 00:11:35.701 user 0m39.321s 00:11:35.701 sys 0m3.607s 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:35.701 13:39:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:35.701 ************************************ 00:11:35.701 END TEST app_repeat 00:11:35.701 ************************************ 00:11:35.701 13:39:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:35.701 13:39:49 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:35.701 13:39:49 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:35.701 13:39:49 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:35.701 13:39:49 event -- common/autotest_common.sh@10 -- # set +x 00:11:35.701 ************************************ 00:11:35.701 START TEST cpu_locks 00:11:35.701 ************************************ 00:11:35.701 13:39:50 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:35.701 * Looking for test storage... 00:11:35.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:11:35.701 13:39:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:35.701 13:39:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:35.701 13:39:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:35.701 13:39:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:35.701 13:39:50 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:35.701 13:39:50 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:35.701 13:39:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:35.961 ************************************ 00:11:35.961 START TEST default_locks 00:11:35.961 ************************************ 00:11:35.961 13:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:11:35.961 13:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1230863 00:11:35.961 13:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1230863 00:11:35.961 13:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:35.961 13:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1230863 ']' 00:11:35.961 13:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.961 13:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:35.961 13:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:35.961 13:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:35.961 13:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:35.961 [2024-06-10 13:39:50.238310] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:35.961 [2024-06-10 13:39:50.238368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230863 ] 00:11:35.961 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.961 [2024-06-10 13:39:50.347426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.961 [2024-06-10 13:39:50.431614] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.899 13:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:36.899 13:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:11:36.899 13:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1230863 00:11:36.899 13:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1230863 00:11:36.899 13:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:37.467 lslocks: write error 00:11:37.467 13:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1230863 00:11:37.467 13:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 1230863 ']' 00:11:37.467 13:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 1230863 00:11:37.467 13:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:11:37.467 13:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:37.467 13:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1230863 00:11:37.467 13:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:37.467 13:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:37.467 13:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1230863' 00:11:37.468 killing process with pid 1230863 00:11:37.468 13:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 1230863 00:11:37.468 13:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 1230863 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1230863 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1230863 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 1230863 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1230863 ']' 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:37.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1230863) - No such process 00:11:37.727 ERROR: process (pid: 1230863) is no longer running 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:37.727 00:11:37.727 real 0m1.965s 00:11:37.727 user 0m2.114s 00:11:37.727 sys 0m0.725s 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:37.727 13:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:37.727 ************************************ 00:11:37.727 END TEST default_locks 00:11:37.727 ************************************ 00:11:37.727 13:39:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:37.727 13:39:52 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:37.727 13:39:52 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:37.727 13:39:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:37.987 ************************************ 00:11:37.987 START TEST default_locks_via_rpc 00:11:37.987 ************************************ 00:11:37.987 13:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:11:37.987 13:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1231238 00:11:37.987 13:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1231238 00:11:37.987 13:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:37.987 13:39:52 
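The default_locks run above exercises the basic lifecycle: a target started with -m 0x1 creates a per-core lock file under /var/tmp (spdk_cpu_lock_000 in this run) and holds an advisory lock named spdk_cpu_lock on it; lslocks confirms the lock while the target is alive, and after the kill, waitforlisten on the dead pid must fail and no lock files may be left behind. The stray "lslocks: write error" lines appear to be harmless noise: grep -q exits as soon as it matches, closing the pipe under lslocks. A condensed sketch of the two checks, assuming $spdk_tgt_pid was recorded at launch by this shell:

    # while the target runs, its core lock must be visible
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock
    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid" || true   # assumes the target is our child
    # after the kill, no per-core lock files may remain
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 ))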
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1231238 ']' 00:11:37.987 13:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.987 13:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:37.987 13:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.987 13:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:37.987 13:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.987 [2024-06-10 13:39:52.285176] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:37.987 [2024-06-10 13:39:52.285233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231238 ] 00:11:37.987 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.987 [2024-06-10 13:39:52.406097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.246 [2024-06-10 13:39:52.491596] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1231238 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1231238 00:11:38.814 13:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:39.381 13:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1231238 00:11:39.381 13:39:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 1231238 ']' 00:11:39.381 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 1231238 00:11:39.381 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:11:39.381 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:39.381 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1231238 00:11:39.381 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:39.381 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:39.381 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1231238' 00:11:39.381 killing process with pid 1231238 00:11:39.381 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 1231238 00:11:39.381 13:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 1231238 00:11:39.949 00:11:39.949 real 0m1.931s 00:11:39.949 user 0m2.064s 00:11:39.949 sys 0m0.717s 00:11:39.949 13:39:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:39.949 13:39:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.949 ************************************ 00:11:39.949 END TEST default_locks_via_rpc 00:11:39.949 ************************************ 00:11:39.949 13:39:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:39.949 13:39:54 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:39.949 13:39:54 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:39.949 13:39:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:39.949 ************************************ 00:11:39.949 START TEST non_locking_app_on_locked_coremask 00:11:39.949 ************************************ 00:11:39.949 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:11:39.949 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1231544 00:11:39.949 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1231544 /var/tmp/spdk.sock 00:11:39.949 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:39.949 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1231544 ']' 00:11:39.949 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.949 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:39.949 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
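default_locks_via_rpc, which finishes above, toggles the same locks at runtime instead of at startup: framework_disable_cpumask_locks drops the per-core locks (the suite's no-lock check passes right after it), and framework_enable_cpumask_locks claims them again, which lslocks then confirms. Roughly, against a target already listening on /var/tmp/spdk.sock, with an assumed rpc.py path:

    rpc=./scripts/rpc.py                                   # path assumed for this sketch
    "$rpc" framework_disable_cpumask_locks                 # drop the core-mask locks
    ! lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # nothing held now
    "$rpc" framework_enable_cpumask_locks                  # claim them again
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock     # lock is back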
00:11:39.949 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:39.949 13:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:39.949 [2024-06-10 13:39:54.295423] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:39.949 [2024-06-10 13:39:54.295479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231544 ] 00:11:39.949 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.949 [2024-06-10 13:39:54.405227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.208 [2024-06-10 13:39:54.491691] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.776 13:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:40.776 13:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:11:40.776 13:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1231800 00:11:40.776 13:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1231800 /var/tmp/spdk2.sock 00:11:40.776 13:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:40.776 13:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1231800 ']' 00:11:40.776 13:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:40.776 13:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:40.776 13:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:40.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:40.776 13:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:40.776 13:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:41.036 [2024-06-10 13:39:55.249953] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:41.036 [2024-06-10 13:39:55.250018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231800 ] 00:11:41.036 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.036 [2024-06-10 13:39:55.412500] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:41.036 [2024-06-10 13:39:55.412527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.295 [2024-06-10 13:39:55.579066] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.864 13:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:41.864 13:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:11:41.864 13:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1231544 00:11:41.864 13:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1231544 00:11:41.864 13:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:42.800 lslocks: write error 00:11:42.800 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1231544 00:11:42.800 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1231544 ']' 00:11:42.800 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1231544 00:11:42.800 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:11:42.800 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:42.800 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1231544 00:11:43.111 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:43.111 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:43.111 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1231544' 00:11:43.111 killing process with pid 1231544 00:11:43.111 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1231544 00:11:43.111 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1231544 00:11:43.680 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1231800 00:11:43.680 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1231800 ']' 00:11:43.680 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1231800 00:11:43.680 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:11:43.680 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:43.680 13:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1231800 00:11:43.680 13:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:43.680 13:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:43.680 13:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1231800' 00:11:43.680 
killing process with pid 1231800 00:11:43.680 13:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1231800 00:11:43.680 13:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1231800 00:11:43.940 00:11:43.940 real 0m4.122s 00:11:43.940 user 0m4.466s 00:11:43.940 sys 0m1.434s 00:11:43.940 13:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:43.940 13:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:43.940 ************************************ 00:11:43.940 END TEST non_locking_app_on_locked_coremask 00:11:43.940 ************************************ 00:11:43.940 13:39:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:43.940 13:39:58 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:43.940 13:39:58 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:43.940 13:39:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:44.199 ************************************ 00:11:44.200 START TEST locking_app_on_unlocked_coremask 00:11:44.200 ************************************ 00:11:44.200 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:11:44.200 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1232375 00:11:44.200 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1232375 /var/tmp/spdk.sock 00:11:44.200 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:44.200 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1232375 ']' 00:11:44.200 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.200 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:44.200 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.200 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:44.200 13:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:44.200 [2024-06-10 13:39:58.492558] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:44.200 [2024-06-10 13:39:58.492627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232375 ] 00:11:44.200 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.200 [2024-06-10 13:39:58.611921] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
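non_locking_app_on_locked_coremask, which ends above, starts one target normally on -m 0x1 (it claims core 0) and a second one on the same mask but with --disable-cpumask-locks and its own RPC socket; the second comes up cleanly because it never tries to claim the core, and lslocks still shows the lock held by the first pid. In outline, with the binary path shortened for the sketch:

    ./build/bin/spdk_tgt -m 0x1 &                                        # claims core 0
    pid1=$!
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                              # starts despite the claim
    # ... wait for both RPC sockets, then:
    lslocks -p "$pid1" | grep -q spdk_cpu_lock                           # lock belongs to the first target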
00:11:44.200 [2024-06-10 13:39:58.611951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.459 [2024-06-10 13:39:58.698280] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.027 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:45.027 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:11:45.027 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1232522 00:11:45.027 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1232522 /var/tmp/spdk2.sock 00:11:45.027 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:45.028 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1232522 ']' 00:11:45.028 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:45.028 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:45.028 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:45.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:45.028 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:45.028 13:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:45.028 [2024-06-10 13:39:59.449448] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:11:45.028 [2024-06-10 13:39:59.449516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232522 ] 00:11:45.287 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.287 [2024-06-10 13:39:59.612922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.546 [2024-06-10 13:39:59.782769] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.114 13:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:46.114 13:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:11:46.114 13:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1232522 00:11:46.114 13:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1232522 00:11:46.114 13:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:47.050 lslocks: write error 00:11:47.050 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1232375 00:11:47.050 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1232375 ']' 00:11:47.050 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1232375 00:11:47.050 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:11:47.050 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:47.050 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1232375 00:11:47.050 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:47.050 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:47.050 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1232375' 00:11:47.050 killing process with pid 1232375 00:11:47.050 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1232375 00:11:47.050 13:40:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1232375 00:11:47.987 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1232522 00:11:47.987 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1232522 ']' 00:11:47.987 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1232522 00:11:47.987 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:11:47.987 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:47.987 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1232522 00:11:47.987 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:11:47.987 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:47.987 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1232522' 00:11:47.987 killing process with pid 1232522 00:11:47.987 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1232522 00:11:47.987 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1232522 00:11:48.245 00:11:48.245 real 0m4.088s 00:11:48.245 user 0m4.470s 00:11:48.245 sys 0m1.371s 00:11:48.245 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:48.246 13:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:48.246 ************************************ 00:11:48.246 END TEST locking_app_on_unlocked_coremask 00:11:48.246 ************************************ 00:11:48.246 13:40:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:48.246 13:40:02 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:48.246 13:40:02 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:48.246 13:40:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:48.246 ************************************ 00:11:48.246 START TEST locking_app_on_locked_coremask 00:11:48.246 ************************************ 00:11:48.246 13:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:11:48.246 13:40:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1233147 00:11:48.246 13:40:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1233147 /var/tmp/spdk.sock 00:11:48.246 13:40:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:48.246 13:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1233147 ']' 00:11:48.246 13:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.246 13:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:48.246 13:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.246 13:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:48.246 13:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:48.246 [2024-06-10 13:40:02.663401] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
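locking_app_on_unlocked_coremask, above, flips the roles: the first target runs with --disable-cpumask-locks, leaving core 0 unclaimed, so the second, normally started target on the same mask is the one that takes the lock, and lslocks is checked against the second pid (1232522 in this run). A sketch, again with a shortened binary path:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &         # leaves core 0 unclaimed
    pid1=$!
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &          # claims core 0
    pid2=$!
    # ... wait for both RPC sockets, then:
    lslocks -p "$pid2" | grep -q spdk_cpu_lock                    # lock belongs to the second target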
00:11:48.246 [2024-06-10 13:40:02.663449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233147 ] 00:11:48.246 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.505 [2024-06-10 13:40:02.767284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.505 [2024-06-10 13:40:02.849345] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1233225 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1233225 /var/tmp/spdk2.sock 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1233225 /var/tmp/spdk2.sock 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1233225 /var/tmp/spdk2.sock 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1233225 ']' 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:49.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:49.074 13:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:49.333 [2024-06-10 13:40:03.566492] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:11:49.333 [2024-06-10 13:40:03.566543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233225 ] 00:11:49.333 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.333 [2024-06-10 13:40:03.714555] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1233147 has claimed it. 00:11:49.333 [2024-06-10 13:40:03.714611] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:49.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1233225) - No such process 00:11:49.901 ERROR: process (pid: 1233225) is no longer running 00:11:49.901 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:49.901 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:11:49.901 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:11:49.901 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:49.901 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:49.901 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:49.901 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1233147 00:11:49.901 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1233147 00:11:49.901 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:50.470 lslocks: write error 00:11:50.470 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1233147 00:11:50.470 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1233147 ']' 00:11:50.470 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1233147 00:11:50.470 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:11:50.470 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:50.470 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1233147 00:11:50.470 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:50.470 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:50.470 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1233147' 00:11:50.470 killing process with pid 1233147 00:11:50.470 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1233147 00:11:50.470 13:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1233147 00:11:50.730 00:11:50.730 real 0m2.459s 00:11:50.730 user 0m2.657s 00:11:50.730 sys 0m0.811s 00:11:50.730 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:11:50.730 13:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:50.730 ************************************ 00:11:50.730 END TEST locking_app_on_locked_coremask 00:11:50.730 ************************************ 00:11:50.730 13:40:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:50.730 13:40:05 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:50.730 13:40:05 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:50.730 13:40:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:50.730 ************************************ 00:11:50.730 START TEST locking_overlapped_coremask 00:11:50.730 ************************************ 00:11:50.730 13:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:11:50.730 13:40:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1233521 00:11:50.730 13:40:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1233521 /var/tmp/spdk.sock 00:11:50.730 13:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1233521 ']' 00:11:50.730 13:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.730 13:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:50.730 13:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.730 13:40:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:11:50.730 13:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:50.730 13:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:50.989 [2024-06-10 13:40:05.205174] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
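In locking_app_on_locked_coremask, finished above, the second target keeps core locking enabled on an already-claimed mask, so it must abort during startup with the claim_cpu_cores error seen in the log, and the test only passes when that startup fails. A compact sketch of the expected-failure check:

    ./build/bin/spdk_tgt -m 0x1 &                  # holds core 0
    pid1=$!
    sleep 1                                        # crude wait; the suite polls the RPC socket instead
    if ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second instance started although core 0 is locked" >&2
        exit 1
    fi
    # expected: "Cannot create lock on core 0" and a non-zero exit from the second target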
00:11:50.989 [2024-06-10 13:40:05.205230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233521 ] 00:11:50.989 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.989 [2024-06-10 13:40:05.325516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:50.989 [2024-06-10 13:40:05.413136] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.989 [2024-06-10 13:40:05.413231] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.989 [2024-06-10 13:40:05.413235] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1233784 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1233784 /var/tmp/spdk2.sock 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1233784 /var/tmp/spdk2.sock 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1233784 /var/tmp/spdk2.sock 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1233784 ']' 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:51.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:51.927 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:51.927 [2024-06-10 13:40:06.165587] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:11:51.927 [2024-06-10 13:40:06.165653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233784 ] 00:11:51.927 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.927 [2024-06-10 13:40:06.298442] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1233521 has claimed it. 00:11:51.927 [2024-06-10 13:40:06.298482] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:52.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1233784) - No such process 00:11:52.495 ERROR: process (pid: 1233784) is no longer running 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1233521 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 1233521 ']' 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 1233521 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1233521 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:52.495 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1233521' 00:11:52.496 killing process with pid 1233521 00:11:52.496 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 
1233521 00:11:52.496 13:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 1233521 00:11:53.064 00:11:53.064 real 0m2.083s 00:11:53.064 user 0m5.785s 00:11:53.064 sys 0m0.554s 00:11:53.064 13:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:53.064 13:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:53.064 ************************************ 00:11:53.064 END TEST locking_overlapped_coremask 00:11:53.064 ************************************ 00:11:53.064 13:40:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:53.064 13:40:07 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:53.064 13:40:07 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:53.064 13:40:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:53.064 ************************************ 00:11:53.064 START TEST locking_overlapped_coremask_via_rpc 00:11:53.064 ************************************ 00:11:53.064 13:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:11:53.064 13:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1234078 00:11:53.064 13:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1234078 /var/tmp/spdk.sock 00:11:53.064 13:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:53.064 13:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1234078 ']' 00:11:53.064 13:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.064 13:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:53.065 13:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.065 13:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:53.065 13:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.065 [2024-06-10 13:40:07.371558] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:53.065 [2024-06-10 13:40:07.371636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234078 ] 00:11:53.065 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.065 [2024-06-10 13:40:07.490074] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
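locking_overlapped_coremask, which just ended, uses masks that share a core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target dies on core 2 and check_remaining_locks verifies that exactly the first target's three lock files survive. A sketch mirroring that comparison:

    ./build/bin/spdk_tgt -m 0x7 &                                  # claims cores 0, 1, 2
    pid1=$!
    # ... wait for the RPC socket, then expect the overlapping start to fail:
    ! ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock          # core 2 is already taken
    # only the first target's locks should remain
    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${expected[*]}" ]]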
00:11:53.065 [2024-06-10 13:40:07.490102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:53.325 [2024-06-10 13:40:07.577439] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.325 [2024-06-10 13:40:07.577534] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.325 [2024-06-10 13:40:07.577538] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.893 13:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:53.893 13:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:11:53.893 13:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1234102 00:11:53.893 13:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1234102 /var/tmp/spdk2.sock 00:11:53.893 13:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:53.893 13:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1234102 ']' 00:11:53.893 13:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:53.893 13:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:53.894 13:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:53.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:53.894 13:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:53.894 13:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.894 [2024-06-10 13:40:08.337566] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:53.894 [2024-06-10 13:40:08.337639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234102 ] 00:11:54.153 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.153 [2024-06-10 13:40:08.472814] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:54.153 [2024-06-10 13:40:08.472842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:54.153 [2024-06-10 13:40:08.619547] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.153 [2024-06-10 13:40:08.622623] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.153 [2024-06-10 13:40:08.622624] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:11:55.089 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:55.089 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:11:55.089 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:55.089 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:55.089 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.089 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:55.089 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:55.089 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.090 [2024-06-10 13:40:09.274650] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1234078 has claimed it. 
00:11:55.090 request: 00:11:55.090 { 00:11:55.090 "method": "framework_enable_cpumask_locks", 00:11:55.090 "req_id": 1 00:11:55.090 } 00:11:55.090 Got JSON-RPC error response 00:11:55.090 response: 00:11:55.090 { 00:11:55.090 "code": -32603, 00:11:55.090 "message": "Failed to claim CPU core: 2" 00:11:55.090 } 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1234078 /var/tmp/spdk.sock 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1234078 ']' 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1234102 /var/tmp/spdk2.sock 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1234102 ']' 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:55.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
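The JSON-RPC exchange above is the expected failure mode: the first target (pid 1234078, core mask 0x7) has already taken the per-core lock files, so the second target (core mask 0x1c) cannot claim core 2, which both masks share. A minimal sketch of driving the same RPC by hand, assuming SPDK's stock scripts/rpc.py client and the socket paths used in this run:

  # first target, mask 0x7 (cores 0-2): creates /var/tmp/spdk_cpu_lock_000..002
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  # second target, mask 0x1c (cores 2-4): core 2 overlaps, so this returns
  # error -32603 "Failed to claim CPU core: 2", as seen above
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks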
00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:55.090 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.349 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:55.349 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:11:55.349 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:55.349 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:55.349 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:55.349 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:55.349 00:11:55.349 real 0m2.452s 00:11:55.349 user 0m1.112s 00:11:55.349 sys 0m0.263s 00:11:55.349 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:55.349 13:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.349 ************************************ 00:11:55.349 END TEST locking_overlapped_coremask_via_rpc 00:11:55.349 ************************************ 00:11:55.349 13:40:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:55.349 13:40:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1234078 ]] 00:11:55.349 13:40:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1234078 00:11:55.349 13:40:09 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1234078 ']' 00:11:55.349 13:40:09 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1234078 00:11:55.349 13:40:09 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:11:55.349 13:40:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:55.349 13:40:09 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1234078 00:11:55.608 13:40:09 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:55.608 13:40:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:55.608 13:40:09 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1234078' 00:11:55.608 killing process with pid 1234078 00:11:55.608 13:40:09 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1234078 00:11:55.608 13:40:09 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1234078 00:11:55.867 13:40:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1234102 ]] 00:11:55.867 13:40:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1234102 00:11:55.867 13:40:10 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1234102 ']' 00:11:55.867 13:40:10 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1234102 00:11:55.867 13:40:10 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:11:55.867 13:40:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:11:55.867 13:40:10 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1234102 00:11:55.867 13:40:10 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:11:55.867 13:40:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:11:55.867 13:40:10 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1234102' 00:11:55.867 killing process with pid 1234102 00:11:55.867 13:40:10 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1234102 00:11:55.867 13:40:10 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1234102 00:11:56.126 13:40:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:56.126 13:40:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:56.126 13:40:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1234078 ]] 00:11:56.126 13:40:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1234078 00:11:56.126 13:40:10 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1234078 ']' 00:11:56.126 13:40:10 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1234078 00:11:56.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1234078) - No such process 00:11:56.126 13:40:10 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1234078 is not found' 00:11:56.126 Process with pid 1234078 is not found 00:11:56.126 13:40:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1234102 ]] 00:11:56.126 13:40:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1234102 00:11:56.126 13:40:10 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1234102 ']' 00:11:56.127 13:40:10 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1234102 00:11:56.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1234102) - No such process 00:11:56.127 13:40:10 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1234102 is not found' 00:11:56.127 Process with pid 1234102 is not found 00:11:56.127 13:40:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:56.127 00:11:56.127 real 0m20.555s 00:11:56.127 user 0m34.598s 00:11:56.127 sys 0m7.033s 00:11:56.127 13:40:10 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:56.127 13:40:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:56.127 ************************************ 00:11:56.127 END TEST cpu_locks 00:11:56.127 ************************************ 00:11:56.386 00:11:56.386 real 0m48.457s 00:11:56.386 user 1m31.517s 00:11:56.386 sys 0m11.940s 00:11:56.386 13:40:10 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:56.386 13:40:10 event -- common/autotest_common.sh@10 -- # set +x 00:11:56.386 ************************************ 00:11:56.386 END TEST event 00:11:56.386 ************************************ 00:11:56.386 13:40:10 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:11:56.386 13:40:10 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:56.386 13:40:10 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:56.386 13:40:10 -- common/autotest_common.sh@10 -- # set +x 00:11:56.386 ************************************ 00:11:56.386 START TEST thread 00:11:56.386 ************************************ 00:11:56.386 13:40:10 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:11:56.386 * Looking for test storage... 00:11:56.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:11:56.386 13:40:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:56.386 13:40:10 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:11:56.386 13:40:10 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:56.386 13:40:10 thread -- common/autotest_common.sh@10 -- # set +x 00:11:56.646 ************************************ 00:11:56.646 START TEST thread_poller_perf 00:11:56.646 ************************************ 00:11:56.646 13:40:10 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:56.646 [2024-06-10 13:40:10.893756] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:56.646 [2024-06-10 13:40:10.893825] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234717 ] 00:11:56.646 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.646 [2024-06-10 13:40:11.017142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.646 [2024-06-10 13:40:11.099352] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.646 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:58.025 ====================================== 00:11:58.025 busy:2510501844 (cyc) 00:11:58.025 total_run_count: 290000 00:11:58.025 tsc_hz: 2500000000 (cyc) 00:11:58.025 ====================================== 00:11:58.025 poller_cost: 8656 (cyc), 3462 (nsec) 00:11:58.025 00:11:58.025 real 0m1.311s 00:11:58.025 user 0m1.181s 00:11:58.025 sys 0m0.124s 00:11:58.025 13:40:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:58.025 13:40:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:58.025 ************************************ 00:11:58.025 END TEST thread_poller_perf 00:11:58.025 ************************************ 00:11:58.025 13:40:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:58.025 13:40:12 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:11:58.025 13:40:12 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:58.025 13:40:12 thread -- common/autotest_common.sh@10 -- # set +x 00:11:58.025 ************************************ 00:11:58.025 START TEST thread_poller_perf 00:11:58.025 ************************************ 00:11:58.025 13:40:12 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:58.025 [2024-06-10 13:40:12.279031] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
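The poller_perf summary above is two integer divisions over the reported counters; a worked check against the 1-microsecond-period run, assuming the tool truncates to whole cycles and nanoseconds:

  # poller_cost (cyc)  = busy cycles / total_run_count
  echo $(( 2510501844 / 290000 ))                # -> 8656
  # poller_cost (nsec) = cycles * 1e9 / tsc_hz  (2.5 GHz TSC)
  echo $(( 8656 * 1000000000 / 2500000000 ))     # -> 3462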
00:11:58.025 [2024-06-10 13:40:12.279087] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235007 ] 00:11:58.025 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.025 [2024-06-10 13:40:12.400091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.025 [2024-06-10 13:40:12.481261] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.025 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:59.404 ====================================== 00:11:59.404 busy:2502371946 (cyc) 00:11:59.404 total_run_count: 3819000 00:11:59.404 tsc_hz: 2500000000 (cyc) 00:11:59.404 ====================================== 00:11:59.404 poller_cost: 655 (cyc), 262 (nsec) 00:11:59.404 00:11:59.404 real 0m1.290s 00:11:59.404 user 0m1.165s 00:11:59.404 sys 0m0.120s 00:11:59.404 13:40:13 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:59.404 13:40:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:59.404 ************************************ 00:11:59.404 END TEST thread_poller_perf 00:11:59.404 ************************************ 00:11:59.404 13:40:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:59.404 00:11:59.404 real 0m2.881s 00:11:59.404 user 0m2.443s 00:11:59.404 sys 0m0.448s 00:11:59.404 13:40:13 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:59.404 13:40:13 thread -- common/autotest_common.sh@10 -- # set +x 00:11:59.404 ************************************ 00:11:59.404 END TEST thread 00:11:59.404 ************************************ 00:11:59.404 13:40:13 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:11:59.404 13:40:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:59.404 13:40:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:59.404 13:40:13 -- common/autotest_common.sh@10 -- # set +x 00:11:59.404 ************************************ 00:11:59.404 START TEST accel 00:11:59.404 ************************************ 00:11:59.404 13:40:13 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:11:59.404 * Looking for test storage... 00:11:59.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:11:59.404 13:40:13 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:11:59.404 13:40:13 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:11:59.404 13:40:13 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:59.404 13:40:13 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1235334 00:11:59.404 13:40:13 accel -- accel/accel.sh@63 -- # waitforlisten 1235334 00:11:59.404 13:40:13 accel -- common/autotest_common.sh@830 -- # '[' -z 1235334 ']' 00:11:59.404 13:40:13 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.404 13:40:13 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:59.404 13:40:13 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:11:59.404 13:40:13 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:59.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.404 13:40:13 accel -- accel/accel.sh@61 -- # build_accel_config 00:11:59.404 13:40:13 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:59.404 13:40:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:59.404 13:40:13 accel -- common/autotest_common.sh@10 -- # set +x 00:11:59.404 13:40:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:59.404 13:40:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:59.404 13:40:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:59.404 13:40:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:59.404 13:40:13 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:59.404 13:40:13 accel -- accel/accel.sh@41 -- # jq -r . 00:11:59.404 [2024-06-10 13:40:13.849855] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:11:59.404 [2024-06-10 13:40:13.849907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235334 ] 00:11:59.664 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.664 [2024-06-10 13:40:13.955572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.664 [2024-06-10 13:40:14.041586] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.232 13:40:14 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:00.232 13:40:14 accel -- common/autotest_common.sh@863 -- # return 0 00:12:00.232 13:40:14 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:00.232 13:40:14 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:00.232 13:40:14 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:12:00.232 13:40:14 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:00.232 13:40:14 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:00.492 13:40:14 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:00.492 13:40:14 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:00.492 13:40:14 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:12:00.492 13:40:14 accel -- common/autotest_common.sh@10 -- # set +x 00:12:00.492 13:40:14 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:00.492 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.492 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.492 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.492 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.492 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.492 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.492 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.492 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.492 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.492 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.492 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.492 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.492 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.492 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.492 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.492 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.493 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.493 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.493 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.493 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.493 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.493 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.493 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.493 
13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.493 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.493 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.493 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.493 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.493 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.493 13:40:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # IFS== 00:12:00.493 13:40:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:00.493 13:40:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:00.493 13:40:14 accel -- accel/accel.sh@75 -- # killprocess 1235334 00:12:00.493 13:40:14 accel -- common/autotest_common.sh@949 -- # '[' -z 1235334 ']' 00:12:00.493 13:40:14 accel -- common/autotest_common.sh@953 -- # kill -0 1235334 00:12:00.493 13:40:14 accel -- common/autotest_common.sh@954 -- # uname 00:12:00.493 13:40:14 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:00.493 13:40:14 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1235334 00:12:00.493 13:40:14 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:00.493 13:40:14 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:00.493 13:40:14 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1235334' 00:12:00.493 killing process with pid 1235334 00:12:00.493 13:40:14 accel -- common/autotest_common.sh@968 -- # kill 1235334 00:12:00.493 13:40:14 accel -- common/autotest_common.sh@973 -- # wait 1235334 00:12:00.752 13:40:15 accel -- accel/accel.sh@76 -- # trap - ERR 00:12:00.752 13:40:15 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:00.752 13:40:15 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:00.752 13:40:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:00.752 13:40:15 accel -- common/autotest_common.sh@10 -- # set +x 00:12:00.752 13:40:15 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:12:00.752 13:40:15 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:00.752 13:40:15 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:12:00.752 13:40:15 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:00.752 13:40:15 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:00.752 13:40:15 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:00.752 13:40:15 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:00.752 13:40:15 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:00.752 13:40:15 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:12:00.752 13:40:15 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
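The opcode-to-module table walked through above comes from the accel_get_opc_assignments RPC; with no accel modules configured, every operation here is assigned to the software module. A standalone sketch of the same query, assuming a target listening on the default /var/tmp/spdk.sock:

  # prints one "<opcode>=<module>" line per operation, e.g. copy=software
  ./scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'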
00:12:00.752 13:40:15 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:00.752 13:40:15 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:12:01.011 13:40:15 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:01.011 13:40:15 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:01.011 13:40:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:01.011 13:40:15 accel -- common/autotest_common.sh@10 -- # set +x 00:12:01.011 ************************************ 00:12:01.011 START TEST accel_missing_filename 00:12:01.011 ************************************ 00:12:01.011 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:12:01.011 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:12:01.011 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:01.011 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:12:01.011 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:01.011 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:12:01.011 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:01.011 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:12:01.011 13:40:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:01.011 13:40:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:12:01.011 13:40:15 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:01.011 13:40:15 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:01.011 13:40:15 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:01.011 13:40:15 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:01.011 13:40:15 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:01.011 13:40:15 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:12:01.011 13:40:15 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:12:01.011 [2024-06-10 13:40:15.329800] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:01.011 [2024-06-10 13:40:15.329858] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235635 ] 00:12:01.011 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.011 [2024-06-10 13:40:15.449604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.271 [2024-06-10 13:40:15.535727] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.271 [2024-06-10 13:40:15.580367] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:01.271 [2024-06-10 13:40:15.642896] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:12:01.271 A filename is required. 
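As the failure above shows, the compress workload refuses to start without an uncompressed input file given via -l; the compress_verify case that follows supplies one but then aborts on -y instead, since compression has no verify path. A sketch of the two invocations, assuming the repository root as the working directory:

  # no -l: exits with "A filename is required."
  ./build/examples/accel_perf -t 1 -w compress
  # -l supplies the input, but -y (verify) is not supported for compress:
  # "Compression does not support the verify option, aborting."
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y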
00:12:01.271 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:12:01.271 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:01.271 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:12:01.271 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:12:01.271 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:12:01.271 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:01.271 00:12:01.271 real 0m0.422s 00:12:01.271 user 0m0.286s 00:12:01.271 sys 0m0.173s 00:12:01.271 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:01.271 13:40:15 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:12:01.271 ************************************ 00:12:01.271 END TEST accel_missing_filename 00:12:01.271 ************************************ 00:12:01.530 13:40:15 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:01.530 13:40:15 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:12:01.530 13:40:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:01.530 13:40:15 accel -- common/autotest_common.sh@10 -- # set +x 00:12:01.530 ************************************ 00:12:01.531 START TEST accel_compress_verify 00:12:01.531 ************************************ 00:12:01.531 13:40:15 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:01.531 13:40:15 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:12:01.531 13:40:15 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:01.531 13:40:15 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:12:01.531 13:40:15 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:01.531 13:40:15 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:12:01.531 13:40:15 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:01.531 13:40:15 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:01.531 13:40:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:01.531 13:40:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:01.531 13:40:15 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:01.531 13:40:15 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:01.531 13:40:15 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:01.531 13:40:15 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:01.531 13:40:15 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:01.531 
13:40:15 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:01.531 13:40:15 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:12:01.531 [2024-06-10 13:40:15.839301] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:01.531 [2024-06-10 13:40:15.839370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235663 ] 00:12:01.531 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.531 [2024-06-10 13:40:15.946424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.790 [2024-06-10 13:40:16.028776] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.790 [2024-06-10 13:40:16.073263] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:01.790 [2024-06-10 13:40:16.135465] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:12:01.790 00:12:01.790 Compression does not support the verify option, aborting. 00:12:01.790 13:40:16 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:12:01.790 13:40:16 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:01.790 13:40:16 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:12:01.790 13:40:16 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:12:01.790 13:40:16 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:12:01.790 13:40:16 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:01.790 00:12:01.790 real 0m0.408s 00:12:01.790 user 0m0.276s 00:12:01.790 sys 0m0.170s 00:12:01.790 13:40:16 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:01.790 13:40:16 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:12:01.790 ************************************ 00:12:01.790 END TEST accel_compress_verify 00:12:01.790 ************************************ 00:12:01.790 13:40:16 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:01.790 13:40:16 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:01.790 13:40:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:01.790 13:40:16 accel -- common/autotest_common.sh@10 -- # set +x 00:12:02.050 ************************************ 00:12:02.050 START TEST accel_wrong_workload 00:12:02.050 ************************************ 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
00:12:02.050 13:40:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:02.050 13:40:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:12:02.050 13:40:16 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:02.050 13:40:16 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:02.050 13:40:16 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:02.050 13:40:16 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:02.050 13:40:16 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:02.050 13:40:16 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:12:02.050 13:40:16 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:12:02.050 Unsupported workload type: foobar 00:12:02.050 [2024-06-10 13:40:16.328810] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:02.050 accel_perf options: 00:12:02.050 [-h help message] 00:12:02.050 [-q queue depth per core] 00:12:02.050 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:02.050 [-T number of threads per core 00:12:02.050 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:02.050 [-t time in seconds] 00:12:02.050 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:02.050 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:12:02.050 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:02.050 [-l for compress/decompress workloads, name of uncompressed input file 00:12:02.050 [-S for crc32c workload, use this seed value (default 0) 00:12:02.050 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:02.050 [-f for fill workload, use this BYTE value (default 255) 00:12:02.050 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:02.050 [-y verify result if this switch is on] 00:12:02.050 [-a tasks to allocate per core (default: same value as -q)] 00:12:02.050 Can be used to spread operations across a wider range of memory. 
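The option listing above is printed because 'foobar' is not one of the accepted -w workload types, so argument parsing fails before any work is queued. For contrast, a valid invocation of the same binary as used by the crc32c case later in this log (repository root assumed as working directory):

  # crc32c workload, seed 32 (-S), verify results (-y), run for 1 second (-t 1)
  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y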
00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:02.050 00:12:02.050 real 0m0.038s 00:12:02.050 user 0m0.020s 00:12:02.050 sys 0m0.018s 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:02.050 13:40:16 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:12:02.050 ************************************ 00:12:02.050 END TEST accel_wrong_workload 00:12:02.050 ************************************ 00:12:02.050 Error: writing output failed: Broken pipe 00:12:02.050 13:40:16 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:02.050 13:40:16 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:12:02.050 13:40:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:02.050 13:40:16 accel -- common/autotest_common.sh@10 -- # set +x 00:12:02.050 ************************************ 00:12:02.050 START TEST accel_negative_buffers 00:12:02.050 ************************************ 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:12:02.050 13:40:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:02.050 13:40:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:12:02.050 13:40:16 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:02.050 13:40:16 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:02.050 13:40:16 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:02.050 13:40:16 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:02.050 13:40:16 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:02.050 13:40:16 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:12:02.050 13:40:16 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:12:02.050 -x option must be non-negative. 
00:12:02.050 [2024-06-10 13:40:16.449890] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:02.050 accel_perf options: 00:12:02.050 [-h help message] 00:12:02.050 [-q queue depth per core] 00:12:02.050 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:02.050 [-T number of threads per core 00:12:02.050 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:02.050 [-t time in seconds] 00:12:02.050 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:02.050 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:12:02.050 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:02.050 [-l for compress/decompress workloads, name of uncompressed input file 00:12:02.050 [-S for crc32c workload, use this seed value (default 0) 00:12:02.050 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:02.050 [-f for fill workload, use this BYTE value (default 255) 00:12:02.050 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:02.050 [-y verify result if this switch is on] 00:12:02.050 [-a tasks to allocate per core (default: same value as -q)] 00:12:02.050 Can be used to spread operations across a wider range of memory. 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:02.050 00:12:02.050 real 0m0.039s 00:12:02.050 user 0m0.023s 00:12:02.050 sys 0m0.016s 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:02.050 13:40:16 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:12:02.050 ************************************ 00:12:02.050 END TEST accel_negative_buffers 00:12:02.050 ************************************ 00:12:02.050 Error: writing output failed: Broken pipe 00:12:02.050 13:40:16 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:02.050 13:40:16 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:12:02.050 13:40:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:02.050 13:40:16 accel -- common/autotest_common.sh@10 -- # set +x 00:12:02.310 ************************************ 00:12:02.310 START TEST accel_crc32c 00:12:02.310 ************************************ 00:12:02.310 13:40:16 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:02.310 13:40:16 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:02.310 [2024-06-10 13:40:16.566884] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:02.310 [2024-06-10 13:40:16.566950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235932 ] 00:12:02.310 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.310 [2024-06-10 13:40:16.687151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.310 [2024-06-10 13:40:16.773051] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.570 13:40:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.509 13:40:17 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:03.509 13:40:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:03.509 00:12:03.509 real 0m1.431s 00:12:03.509 user 0m1.267s 00:12:03.509 sys 0m0.177s 00:12:03.509 13:40:17 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:03.509 13:40:17 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:03.509 ************************************ 00:12:03.509 END TEST accel_crc32c 00:12:03.509 ************************************ 00:12:03.768 13:40:18 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:03.768 13:40:18 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:12:03.768 13:40:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:03.768 13:40:18 accel -- common/autotest_common.sh@10 -- # set +x 00:12:03.768 ************************************ 00:12:03.768 START TEST accel_crc32c_C2 00:12:03.768 ************************************ 00:12:03.768 13:40:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:03.768 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:03.768 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:03.768 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:03.768 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:03.768 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:03.769 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:03.769 13:40:18 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:03.769 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:03.769 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:03.769 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:03.769 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:03.769 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:03.769 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:03.769 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:03.769 [2024-06-10 13:40:18.080260] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:03.769 [2024-06-10 13:40:18.080317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236189 ] 00:12:03.769 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.769 [2024-06-10 13:40:18.200955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.029 [2024-06-10 13:40:18.286480] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.029 13:40:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.409 
13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.409 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:05.410 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:05.410 13:40:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:05.410 00:12:05.410 real 0m1.431s 00:12:05.410 user 0m1.264s 00:12:05.410 sys 0m0.180s 00:12:05.410 13:40:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:05.410 13:40:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:05.410 ************************************ 00:12:05.410 END TEST accel_crc32c_C2 00:12:05.410 ************************************ 00:12:05.410 13:40:19 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:05.410 13:40:19 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:05.410 13:40:19 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:05.410 13:40:19 accel -- common/autotest_common.sh@10 -- # set +x 00:12:05.410 ************************************ 00:12:05.410 START TEST accel_copy 00:12:05.410 ************************************ 00:12:05.410 13:40:19 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:05.410 13:40:19 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:12:05.410 [2024-06-10 13:40:19.590184] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:05.410 [2024-06-10 13:40:19.590241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236455 ] 00:12:05.410 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.410 [2024-06-10 13:40:19.707524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.410 [2024-06-10 13:40:19.792572] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.410 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.411 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.411 13:40:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.411 13:40:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.411 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.411 13:40:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:06.845 13:40:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:06.846 13:40:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:06.846 13:40:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:06.846 13:40:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:06.846 13:40:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:06.846 13:40:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:06.846 13:40:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:06.846 13:40:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:06.846 13:40:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:06.846 13:40:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:06.846 13:40:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:06.846 13:40:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:06.846 13:40:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:06.846 00:12:06.846 real 0m1.424s 00:12:06.846 user 0m1.260s 00:12:06.846 sys 0m0.176s 00:12:06.846 13:40:20 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:06.846 13:40:20 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:12:06.846 ************************************ 00:12:06.846 END TEST accel_copy 00:12:06.846 ************************************ 00:12:06.846 13:40:21 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:06.846 13:40:21 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:12:06.846 13:40:21 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:06.846 13:40:21 accel -- common/autotest_common.sh@10 -- # set +x 00:12:06.846 ************************************ 00:12:06.846 START TEST accel_fill 00:12:06.846 ************************************ 00:12:06.846 13:40:21 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:06.846 13:40:21 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:12:06.846 13:40:21 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:12:06.846 [2024-06-10 13:40:21.101574] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:06.846 [2024-06-10 13:40:21.101661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236721 ] 00:12:06.846 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.846 [2024-06-10 13:40:21.221817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.846 [2024-06-10 13:40:21.303649] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.135 13:40:21 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:12:07.135 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.136 13:40:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:08.073 13:40:22 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:08.073 00:12:08.073 real 0m1.422s 00:12:08.073 user 0m1.255s 00:12:08.073 sys 0m0.182s 00:12:08.073 13:40:22 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:08.073 13:40:22 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:12:08.073 ************************************ 00:12:08.073 END TEST accel_fill 00:12:08.073 ************************************ 00:12:08.073 13:40:22 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:08.073 13:40:22 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:08.073 13:40:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:08.073 13:40:22 accel -- common/autotest_common.sh@10 -- # set +x 00:12:08.332 ************************************ 00:12:08.332 START TEST accel_copy_crc32c 00:12:08.332 ************************************ 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:08.332 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
00:12:08.333 [2024-06-10 13:40:22.604846] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:08.333 [2024-06-10 13:40:22.604925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236981 ] 00:12:08.333 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.333 [2024-06-10 13:40:22.726402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.593 [2024-06-10 13:40:22.808805] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:08.593 13:40:22 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.593 13:40:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:09.531 00:12:09.531 real 0m1.424s 00:12:09.531 user 0m1.260s 00:12:09.531 sys 0m0.170s 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:09.531 13:40:23 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:09.531 ************************************ 00:12:09.531 END TEST accel_copy_crc32c 00:12:09.531 ************************************ 00:12:09.791 13:40:24 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:09.791 13:40:24 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:12:09.791 13:40:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:09.791 13:40:24 accel -- common/autotest_common.sh@10 -- # set +x 00:12:09.791 ************************************ 00:12:09.791 START TEST accel_copy_crc32c_C2 00:12:09.791 ************************************ 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:09.791 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:09.791 [2024-06-10 13:40:24.103932] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:09.791 [2024-06-10 13:40:24.103989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237238 ] 00:12:09.791 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.791 [2024-06-10 13:40:24.223930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.051 [2024-06-10 13:40:24.308801] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:10.051 13:40:24 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.051 13:40:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:11.431 00:12:11.431 real 0m1.425s 00:12:11.431 user 0m1.256s 00:12:11.431 sys 0m0.174s 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:11.431 13:40:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:12:11.431 ************************************ 00:12:11.431 END TEST accel_copy_crc32c_C2 00:12:11.431 ************************************ 00:12:11.431 13:40:25 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:12:11.431 13:40:25 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:11.431 13:40:25 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:11.431 13:40:25 accel -- common/autotest_common.sh@10 -- # set +x 00:12:11.431 ************************************ 00:12:11.431 START TEST accel_dualcast 00:12:11.431 ************************************ 00:12:11.431 13:40:25 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:12:11.431 [2024-06-10 13:40:25.606745] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:12:11.431 [2024-06-10 13:40:25.606828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237496 ] 00:12:11.431 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.431 [2024-06-10 13:40:25.727604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.431 [2024-06-10 13:40:25.810023] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:11.431 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 
13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:11.432 13:40:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:12.810 13:40:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:12.810 13:40:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:12.811 13:40:26 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:12.811 13:40:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:12.811 00:12:12.811 real 0m1.423s 00:12:12.811 user 0m1.246s 00:12:12.811 sys 0m0.182s 00:12:12.811 13:40:26 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:12.811 13:40:26 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:12:12.811 ************************************ 00:12:12.811 END TEST accel_dualcast 00:12:12.811 ************************************ 00:12:12.811 13:40:27 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:12.811 13:40:27 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:12.811 13:40:27 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:12.811 13:40:27 accel -- common/autotest_common.sh@10 -- # set +x 00:12:12.811 ************************************ 00:12:12.811 START TEST accel_compare 00:12:12.811 ************************************ 00:12:12.811 13:40:27 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:12:12.811 13:40:27 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:12:12.811 [2024-06-10 13:40:27.105024] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
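The long runs of IFS=: / read -r var val / case "$var" entries that fill each block are the wrapper walking accel_perf's key: value report one line at a time; the accel.sh@22 and @23 entries show it capturing accel_module and the opcode from that report. A condensed sketch of the pattern follows, with illustrative key names and an assumed $SPDK_EXAMPLES path, not the literal accel.sh source:

while IFS=: read -r var val; do            # accel.sh@19 in the trace
  val=${val# }                             # @20: "val=software", "val=compare", "val='4096 bytes'", ...
  case "$var" in                           # @21
    *Module*)   accel_module=$val ;;       # @22: accel_module=software
    *Workload*) accel_opc=$val ;;          # @23: accel_opc=compare
  esac
done < <("$SPDK_EXAMPLES/accel_perf" -t 1 -w compare -y)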
00:12:12.811 [2024-06-10 13:40:27.105080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237748 ] 00:12:12.811 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.811 [2024-06-10 13:40:27.225213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.071 [2024-06-10 13:40:27.309853] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:13.071 13:40:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:14.449 13:40:28 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:14.449 13:40:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:14.450 13:40:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:14.450 13:40:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:14.450 00:12:14.450 real 0m1.422s 00:12:14.450 user 0m1.255s 00:12:14.450 sys 0m0.172s 00:12:14.450 13:40:28 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:14.450 13:40:28 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:12:14.450 ************************************ 00:12:14.450 END TEST accel_compare 00:12:14.450 ************************************ 00:12:14.450 13:40:28 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:14.450 13:40:28 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:12:14.450 13:40:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:14.450 13:40:28 accel -- common/autotest_common.sh@10 -- # set +x 00:12:14.450 ************************************ 00:12:14.450 START TEST accel_xor 00:12:14.450 ************************************ 00:12:14.450 13:40:28 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:14.450 [2024-06-10 13:40:28.604093] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
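Each test above closes with the same three accel.sh@27 checks; the \s\o\f\t\w\a\r\e run of backslashes is only how xtrace prints the literal right-hand side of ==. Spelled out, the assertions amount to roughly the lines below, where expected_module is an illustrative name for the value the wrapper compares against (software in these runs):

[[ -n $accel_module ]]                     # a module name was parsed from accel_perf's report
[[ -n $accel_opc ]]                        # an opcode was parsed as well
[[ $accel_module == "$expected_module" ]]  # and the module that ran is the one the test asked for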
00:12:14.450 [2024-06-10 13:40:28.604153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238014 ] 00:12:14.450 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.450 [2024-06-10 13:40:28.725425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.450 [2024-06-10 13:40:28.808408] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 13:40:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:15.825 
13:40:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:15.825 13:40:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:15.825 00:12:15.825 real 0m1.421s 00:12:15.825 user 0m1.252s 00:12:15.825 sys 0m0.173s 00:12:15.825 13:40:29 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:15.825 13:40:29 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:15.825 ************************************ 00:12:15.825 END TEST accel_xor 00:12:15.825 ************************************ 00:12:15.825 13:40:30 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:15.825 13:40:30 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:12:15.825 13:40:30 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:15.825 13:40:30 accel -- common/autotest_common.sh@10 -- # set +x 00:12:15.825 ************************************ 00:12:15.825 START TEST accel_xor 00:12:15.825 ************************************ 00:12:15.825 13:40:30 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:15.825 13:40:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:15.825 [2024-06-10 13:40:30.109778] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
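The accel_xor block that begins here repeats the previous one with a single extra flag, -x 3, which later surfaces in the trace as val=3 where the first pass showed val=2; it reads like the number of source buffers fed to the xor operation, though that interpretation is inferred from the values rather than stated in the log.

# The only difference between the two xor passes, as far as the trace shows:
accel_perf -t 1 -w xor -y        # first pass: val=2 sources
accel_perf -t 1 -w xor -y -x 3   # this pass:  val=3 sources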
00:12:15.825 [2024-06-10 13:40:30.109836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238301 ] 00:12:15.825 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.825 [2024-06-10 13:40:30.229407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.085 [2024-06-10 13:40:30.315368] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:16.085 13:40:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.465 
13:40:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:17.465 13:40:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:17.465 00:12:17.465 real 0m1.423s 00:12:17.465 user 0m1.255s 00:12:17.465 sys 0m0.174s 00:12:17.465 13:40:31 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:17.465 13:40:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:17.465 ************************************ 00:12:17.465 END TEST accel_xor 00:12:17.465 ************************************ 00:12:17.465 13:40:31 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:17.465 13:40:31 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:12:17.465 13:40:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:17.465 13:40:31 accel -- common/autotest_common.sh@10 -- # set +x 00:12:17.465 ************************************ 00:12:17.465 START TEST accel_dif_verify 00:12:17.465 ************************************ 00:12:17.465 13:40:31 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:12:17.465 [2024-06-10 13:40:31.609847] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:12:17.465 [2024-06-10 13:40:31.609912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238584 ] 00:12:17.465 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.465 [2024-06-10 13:40:31.730649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.465 [2024-06-10 13:40:31.812253] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 
13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:12:17.465 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:17.466 13:40:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:18.844 
13:40:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:18.844 13:40:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:18.844 13:40:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:18.844 13:40:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:18.844 13:40:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:18.844 13:40:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:18.844 13:40:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:18.844 13:40:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:18.844 13:40:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:18.844 13:40:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:18.844 13:40:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:18.844 13:40:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:18.844 13:40:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:18.844 00:12:18.844 real 0m1.419s 00:12:18.844 user 0m1.254s 00:12:18.844 sys 0m0.171s 00:12:18.844 13:40:33 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:18.844 13:40:33 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:12:18.844 ************************************ 00:12:18.844 END TEST accel_dif_verify 00:12:18.844 ************************************ 00:12:18.844 13:40:33 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:18.844 13:40:33 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:12:18.844 13:40:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:18.844 13:40:33 accel -- common/autotest_common.sh@10 -- # set +x 00:12:18.844 ************************************ 00:12:18.844 START TEST accel_dif_generate 00:12:18.844 ************************************ 00:12:18.844 13:40:33 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:12:18.844 13:40:33 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:12:18.844 13:40:33 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:12:18.844 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:18.844 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:18.844 
13:40:33 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:18.844 13:40:33 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:18.844 13:40:33 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:12:18.844 13:40:33 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:18.844 13:40:33 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:18.844 13:40:33 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:18.844 13:40:33 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:18.844 13:40:33 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:18.845 13:40:33 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:12:18.845 13:40:33 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:12:18.845 [2024-06-10 13:40:33.105447] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:18.845 [2024-06-10 13:40:33.105512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238871 ] 00:12:18.845 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.845 [2024-06-10 13:40:33.225174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.845 [2024-06-10 13:40:33.308851] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
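The dif_generate setup above reuses the sizes seen in dif_verify: a 4096-byte transfer plus values that read like a 512-byte block size and 8 bytes of metadata per block. Taking that interpretation (an inference from the val= lines, not something the log states), the per-buffer arithmetic is:

echo $(( 4096 / 512 ))         # 8 protected blocks per 4 KiB buffer
echo $(( (4096 / 512) * 8 ))   # 64 bytes of DIF metadata generated/checked per buffer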
00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:19.104 13:40:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:19.105 13:40:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:19.105 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:19.105 13:40:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:20.041 13:40:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:20.042 13:40:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:20.042 13:40:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:20.042 13:40:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:20.042 13:40:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:20.042 13:40:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:20.042 13:40:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:20.042 13:40:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:20.042 00:12:20.042 real 0m1.419s 00:12:20.042 user 0m1.258s 00:12:20.042 sys 
0m0.167s 00:12:20.042 13:40:34 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:20.042 13:40:34 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:12:20.042 ************************************ 00:12:20.042 END TEST accel_dif_generate 00:12:20.042 ************************************ 00:12:20.301 13:40:34 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:20.301 13:40:34 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:12:20.301 13:40:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:20.301 13:40:34 accel -- common/autotest_common.sh@10 -- # set +x 00:12:20.301 ************************************ 00:12:20.301 START TEST accel_dif_generate_copy 00:12:20.301 ************************************ 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:20.301 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:12:20.301 [2024-06-10 13:40:34.601734] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
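(Note on the trace above: each case in this accel suite, dif_generate included, reduces to a single accel_perf invocation. The wrapper in accel/accel.sh collects an optional module config in the accel_json_cfg array, joins it with IFS=, and normalizes it through jq -r ., then hands it to the example binary over a process-substitution fd, which is why every command line shows -c /dev/fd/62. A minimal standalone reproduction, assuming an SPDK build at the workspace path from the log and the empty module config this run used, could look like the sketch below; the fd plumbing is a reconstruction from the accel.sh@31/40/41 lines, not the literal wrapper body.)

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    accel_json_cfg=()    # stayed empty in this run, so only the software engine is loaded
    ./build/examples/accel_perf \
        -c <(IFS=,; printf '%s\n' "${accel_json_cfg[*]}" | jq -r .) \
        -t 1 -w dif_generate    # 1-second dif_generate workload, as in the trace
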
00:12:20.301 [2024-06-10 13:40:34.601805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239156 ] 00:12:20.301 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.301 [2024-06-10 13:40:34.726233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.562 [2024-06-10 13:40:34.809093] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:20.562 13:40:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
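(The accel.sh@19-21 lines that dominate these traces, together with the @27 checks at the end of each case, are the wrapper's verification pass: it reads accel_perf's configuration/summary output line by line, splits each line on ':', records the opcode and engine it saw, and finally asserts that the software module executed the expected opcode. A rough reconstruction is sketched below; the loop shape, the accel_opc/accel_module names, and the final [[ ... ]] check come straight from the trace, but the literal key strings matched in the case arms are assumptions, and the config fd the wrapper normally passes is omitted here.)

    accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        case "$var" in
            *'workload'*) accel_opc=${val//[[:space:]]/} ;;     # key name assumed
            *'module'*)   accel_module=${val//[[:space:]]/} ;;  # key name assumed
        esac
    done < <(./build/examples/accel_perf -t 1 -w dif_generate_copy)
    # mirrors the accel.sh@27 assertions seen after each run in this log
    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]
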
00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:21.940 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:21.941 00:12:21.941 real 0m1.425s 00:12:21.941 user 0m1.262s 00:12:21.941 sys 0m0.168s 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:21.941 13:40:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:12:21.941 ************************************ 00:12:21.941 END TEST accel_dif_generate_copy 00:12:21.941 ************************************ 00:12:21.941 13:40:36 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:12:21.941 13:40:36 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:21.941 13:40:36 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:12:21.941 13:40:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:21.941 13:40:36 accel -- common/autotest_common.sh@10 -- # set +x 00:12:21.941 ************************************ 00:12:21.941 START TEST accel_comp 00:12:21.941 ************************************ 00:12:21.941 13:40:36 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:12:21.941 [2024-06-10 13:40:36.103331] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:21.941 [2024-06-10 13:40:36.103390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239443 ] 00:12:21.941 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.941 [2024-06-10 13:40:36.221379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.941 [2024-06-10 13:40:36.302937] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 
13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:21.941 13:40:36 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:21.941 13:40:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:12:23.320 13:40:37 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:23.320 00:12:23.320 real 0m1.420s 00:12:23.320 user 0m1.260s 00:12:23.320 sys 0m0.165s 00:12:23.320 13:40:37 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:23.320 13:40:37 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:12:23.320 ************************************ 00:12:23.320 END TEST accel_comp 00:12:23.320 ************************************ 00:12:23.320 13:40:37 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:23.320 13:40:37 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:12:23.320 13:40:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:23.320 13:40:37 accel -- common/autotest_common.sh@10 -- # set +x 00:12:23.320 ************************************ 00:12:23.320 START TEST accel_decomp 00:12:23.320 ************************************ 00:12:23.320 13:40:37 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:12:23.320 13:40:37 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:12:23.320 [2024-06-10 13:40:37.597176] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:23.320 [2024-06-10 13:40:37.597232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239722 ] 00:12:23.320 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.320 [2024-06-10 13:40:37.717436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.579 [2024-06-10 13:40:37.799391] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.579 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:23.579 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:23.580 13:40:37 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:23.580 13:40:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:24.959 13:40:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:24.959 00:12:24.959 real 0m1.424s 00:12:24.959 user 0m1.240s 00:12:24.959 sys 0m0.189s 00:12:24.960 13:40:38 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:24.960 13:40:38 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:12:24.960 ************************************ 00:12:24.960 END TEST accel_decomp 00:12:24.960 ************************************ 00:12:24.960 
13:40:39 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:12:24.960 13:40:39 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:12:24.960 13:40:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:24.960 13:40:39 accel -- common/autotest_common.sh@10 -- # set +x 00:12:24.960 ************************************ 00:12:24.960 START TEST accel_decomp_full 00:12:24.960 ************************************ 00:12:24.960 13:40:39 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:12:24.960 [2024-06-10 13:40:39.088395] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:12:24.960 [2024-06-10 13:40:39.088467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240011 ] 00:12:24.960 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.960 [2024-06-10 13:40:39.210971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.960 [2024-06-10 13:40:39.292528] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
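(For the decomp_full case the wrapper's command line, visible a few lines up, is accel_perf -c /dev/fd/62 -t 1 -w decompress -l .../spdk/test/accel/bib -y -o 0: -l points at the compressed test input shipped in the repo, -y turns on verification, and with -o 0 the trace reports a 111250-byte transfer instead of the 4096 bytes of the earlier cases, which reads as "size the buffer from the input file". That last interpretation is an inference from this trace rather than a documented contract. A standalone equivalent, with the config-fd plumbing from the dif_generate note above elided:)

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -l: compressed input in the repo; -y: verify the decompressed output;
    # -o 0: full-size (111250-byte) transfers, per the trace above
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0
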
00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:12:24.960 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:24.961 13:40:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:26.351 13:40:40 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:26.351 00:12:26.351 real 0m1.435s 00:12:26.351 user 0m1.255s 00:12:26.351 sys 0m0.181s 00:12:26.351 13:40:40 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:26.351 13:40:40 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:12:26.352 ************************************ 00:12:26.352 END TEST accel_decomp_full 00:12:26.352 ************************************ 00:12:26.352 13:40:40 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:12:26.352 13:40:40 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:12:26.352 13:40:40 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:26.352 13:40:40 accel -- common/autotest_common.sh@10 -- # set +x 00:12:26.352 ************************************ 00:12:26.352 START TEST accel_decomp_mcore 00:12:26.352 ************************************ 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:26.352 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:12:26.352 [2024-06-10 13:40:40.596493] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:26.352 [2024-06-10 13:40:40.596568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240293 ] 00:12:26.352 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.352 [2024-06-10 13:40:40.717408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.352 [2024-06-10 13:40:40.803175] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.352 [2024-06-10 13:40:40.803269] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.352 [2024-06-10 13:40:40.803384] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.352 [2024-06-10 13:40:40.803385] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.612 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:26.613 13:40:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:27.550 13:40:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:27.550 13:40:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:27.550 13:40:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:27.550 13:40:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:27.550 13:40:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:27.550 13:40:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:27.550 13:40:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:27.550 13:40:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:27.550 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:27.551 00:12:27.551 real 0m1.441s 00:12:27.551 user 0m4.616s 00:12:27.551 sys 0m0.188s 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:27.551 13:40:42 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:12:27.551 ************************************ 00:12:27.551 END TEST accel_decomp_mcore 00:12:27.551 ************************************ 00:12:27.811 13:40:42 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:27.811 13:40:42 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:12:27.811 13:40:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:27.811 13:40:42 accel -- common/autotest_common.sh@10 -- # set +x 00:12:27.811 ************************************ 00:12:27.811 START TEST accel_decomp_full_mcore 00:12:27.811 ************************************ 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:27.811 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:12:27.811 [2024-06-10 13:40:42.118857] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:27.811 [2024-06-10 13:40:42.118925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240583 ] 00:12:27.811 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.811 [2024-06-10 13:40:42.241531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.072 [2024-06-10 13:40:42.328682] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.072 [2024-06-10 13:40:42.328775] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.072 [2024-06-10 13:40:42.328889] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.072 [2024-06-10 13:40:42.328890] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.072 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.073 13:40:42 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:28.073 13:40:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:29.453 13:40:43 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:29.453 00:12:29.453 real 0m1.457s 00:12:29.453 user 0m4.679s 00:12:29.453 sys 0m0.179s 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:29.453 13:40:43 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:12:29.453 ************************************ 00:12:29.453 END TEST accel_decomp_full_mcore 00:12:29.453 ************************************ 00:12:29.453 13:40:43 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:29.453 13:40:43 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:12:29.453 13:40:43 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:29.453 13:40:43 accel -- common/autotest_common.sh@10 -- # set +x 00:12:29.453 ************************************ 00:12:29.453 START TEST accel_decomp_mthread 00:12:29.453 ************************************ 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:12:29.453 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
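The two mcore runs that finish above exercise software decompression of test/accel/bib across a four-core mask (-m 0xf); both the 4096-byte and the full 111250-byte ("full_mcore", -o 0) variants complete in roughly 1.4-1.5 seconds of wall time on four reactors. A minimal stand-alone sketch of the same invocation, assuming the SPDK build tree from the trace and omitting the effectively empty /dev/fd/62 accel JSON config:

    build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf

Here -t is the run time in seconds, -w the workload, -l the compressed input file, -y enables result verification, and -m is the reactor core mask; -o 0 is what the "full" variants pass so the whole input is handled as one buffer rather than 4 KiB blocks (the -o semantics are inferred from the 4096-byte vs 111250-byte sizes reported in the trace, not from the tool's documentation).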
00:12:29.453 [2024-06-10 13:40:43.659496] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:29.453 [2024-06-10 13:40:43.659571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240871 ] 00:12:29.453 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.453 [2024-06-10 13:40:43.780308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.453 [2024-06-10 13:40:43.861441] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.454 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.713 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:12:29.713 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.713 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.713 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.713 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:29.713 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.713 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.713 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:29.713 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:29.713 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:29.713 13:40:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:29.713 13:40:43 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:12:30.650 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:30.650 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:30.650 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:30.650 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:30.650 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:30.650 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:30.650 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:30.650 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:30.651 00:12:30.651 real 0m1.433s 00:12:30.651 user 0m1.267s 00:12:30.651 sys 0m0.180s 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:30.651 13:40:45 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:12:30.651 ************************************ 00:12:30.651 END TEST accel_decomp_mthread 00:12:30.651 ************************************ 00:12:30.651 13:40:45 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:30.651 13:40:45 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:12:30.651 13:40:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:30.651 13:40:45 
accel -- common/autotest_common.sh@10 -- # set +x 00:12:30.910 ************************************ 00:12:30.910 START TEST accel_decomp_full_mthread 00:12:30.910 ************************************ 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:12:30.910 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:12:30.910 [2024-06-10 13:40:45.170448] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:12:30.910 [2024-06-10 13:40:45.170502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241161 ] 00:12:30.910 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.910 [2024-06-10 13:40:45.290010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.910 [2024-06-10 13:40:45.371719] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:31.170 13:40:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:32.186 00:12:32.186 real 0m1.457s 00:12:32.186 user 0m1.301s 00:12:32.186 sys 0m0.170s 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:32.186 13:40:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:12:32.186 ************************************ 00:12:32.186 END TEST accel_decomp_full_mthread 00:12:32.186 
************************************ 00:12:32.186 13:40:46 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:12:32.186 13:40:46 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:32.186 13:40:46 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:12:32.186 13:40:46 accel -- accel/accel.sh@137 -- # build_accel_config 00:12:32.186 13:40:46 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:32.186 13:40:46 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:32.186 13:40:46 accel -- common/autotest_common.sh@10 -- # set +x 00:12:32.186 13:40:46 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:32.186 13:40:46 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:32.186 13:40:46 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:32.186 13:40:46 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:32.186 13:40:46 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:32.186 13:40:46 accel -- accel/accel.sh@41 -- # jq -r . 00:12:32.446 ************************************ 00:12:32.446 START TEST accel_dif_functional_tests 00:12:32.446 ************************************ 00:12:32.446 13:40:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:32.446 [2024-06-10 13:40:46.721286] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:32.446 [2024-06-10 13:40:46.721343] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241445 ] 00:12:32.446 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.446 [2024-06-10 13:40:46.840323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:32.705 [2024-06-10 13:40:46.923434] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.705 [2024-06-10 13:40:46.923527] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.705 [2024-06-10 13:40:46.923532] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.705 00:12:32.705 00:12:32.705 CUnit - A unit testing framework for C - Version 2.1-3 00:12:32.705 http://cunit.sourceforge.net/ 00:12:32.705 00:12:32.705 00:12:32.705 Suite: accel_dif 00:12:32.705 Test: verify: DIF generated, GUARD check ...passed 00:12:32.705 Test: verify: DIF generated, APPTAG check ...passed 00:12:32.705 Test: verify: DIF generated, REFTAG check ...passed 00:12:32.705 Test: verify: DIF not generated, GUARD check ...[2024-06-10 13:40:46.997847] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:32.705 passed 00:12:32.705 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 13:40:46.997912] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:32.705 passed 00:12:32.705 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 13:40:46.997945] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:32.705 passed 00:12:32.705 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:32.705 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 13:40:46.998010] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:32.705 passed 00:12:32.705 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:12:32.705 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:32.705 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:32.705 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 13:40:46.998147] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:32.705 passed 00:12:32.705 Test: verify copy: DIF generated, GUARD check ...passed 00:12:32.705 Test: verify copy: DIF generated, APPTAG check ...passed 00:12:32.705 Test: verify copy: DIF generated, REFTAG check ...passed 00:12:32.705 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 13:40:46.998299] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:32.705 passed 00:12:32.705 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 13:40:46.998333] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:32.705 passed 00:12:32.705 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 13:40:46.998365] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:32.705 passed 00:12:32.705 Test: generate copy: DIF generated, GUARD check ...passed 00:12:32.705 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:32.705 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:32.705 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:32.705 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:32.705 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:32.705 Test: generate copy: iovecs-len validate ...[2024-06-10 13:40:46.998602] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:12:32.705 passed 00:12:32.705 Test: generate copy: buffer alignment validate ...passed 00:12:32.705 00:12:32.705 Run Summary: Type Total Ran Passed Failed Inactive 00:12:32.705 suites 1 1 n/a 0 0 00:12:32.705 tests 26 26 26 0 0 00:12:32.705 asserts 115 115 115 0 n/a 00:12:32.705 00:12:32.705 Elapsed time = 0.002 seconds 00:12:32.965 00:12:32.965 real 0m0.497s 00:12:32.965 user 0m0.637s 00:12:32.965 sys 0m0.193s 00:12:32.965 13:40:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:32.965 13:40:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:12:32.965 ************************************ 00:12:32.965 END TEST accel_dif_functional_tests 00:12:32.965 ************************************ 00:12:32.965 00:12:32.965 real 0m33.533s 00:12:32.965 user 0m35.564s 00:12:32.965 sys 0m6.110s 00:12:32.965 13:40:47 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:32.965 13:40:47 accel -- common/autotest_common.sh@10 -- # set +x 00:12:32.965 ************************************ 00:12:32.965 END TEST accel 00:12:32.965 ************************************ 00:12:32.965 13:40:47 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:12:32.965 13:40:47 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:32.965 13:40:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:32.965 13:40:47 -- common/autotest_common.sh@10 -- # set +x 00:12:32.965 ************************************ 00:12:32.965 START TEST accel_rpc 00:12:32.965 ************************************ 00:12:32.965 13:40:47 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:12:32.965 * Looking for test storage... 00:12:32.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:12:32.965 13:40:47 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:32.965 13:40:47 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1241600 00:12:32.965 13:40:47 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1241600 00:12:32.965 13:40:47 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:32.965 13:40:47 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 1241600 ']' 00:12:32.965 13:40:47 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.965 13:40:47 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:32.965 13:40:47 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.965 13:40:47 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:32.965 13:40:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.223 [2024-06-10 13:40:47.479593] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:12:33.223 [2024-06-10 13:40:47.479664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241600 ] 00:12:33.223 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.223 [2024-06-10 13:40:47.601017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.223 [2024-06-10 13:40:47.685853] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.161 13:40:48 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:34.161 13:40:48 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:12:34.161 13:40:48 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:34.161 13:40:48 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:34.161 13:40:48 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:34.161 13:40:48 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:34.161 13:40:48 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:34.161 13:40:48 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:34.161 13:40:48 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:34.161 13:40:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.161 ************************************ 00:12:34.161 START TEST accel_assign_opcode 00:12:34.161 ************************************ 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:34.161 [2024-06-10 13:40:48.416119] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:34.161 [2024-06-10 13:40:48.424128] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:34.161 13:40:48 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:12:34.161 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:34.420 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:34.420 software 00:12:34.420 00:12:34.420 real 0m0.249s 00:12:34.420 user 0m0.050s 00:12:34.420 sys 0m0.012s 00:12:34.420 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:34.420 13:40:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:34.420 ************************************ 00:12:34.420 END TEST accel_assign_opcode 00:12:34.420 ************************************ 00:12:34.420 13:40:48 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1241600 00:12:34.420 13:40:48 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 1241600 ']' 00:12:34.420 13:40:48 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 1241600 00:12:34.420 13:40:48 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:12:34.420 13:40:48 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:34.420 13:40:48 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1241600 00:12:34.420 13:40:48 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:34.420 13:40:48 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:34.420 13:40:48 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1241600' 00:12:34.420 killing process with pid 1241600 00:12:34.420 13:40:48 accel_rpc -- common/autotest_common.sh@968 -- # kill 1241600 00:12:34.420 13:40:48 accel_rpc -- common/autotest_common.sh@973 -- # wait 1241600 00:12:34.679 00:12:34.679 real 0m1.785s 00:12:34.679 user 0m1.837s 00:12:34.679 sys 0m0.594s 00:12:34.679 13:40:49 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:34.679 13:40:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.679 ************************************ 00:12:34.679 END TEST accel_rpc 00:12:34.679 ************************************ 00:12:34.679 13:40:49 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:12:34.679 13:40:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:34.679 13:40:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:34.679 13:40:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 ************************************ 00:12:34.938 START TEST app_cmdline 00:12:34.938 ************************************ 00:12:34.938 13:40:49 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:12:34.938 * Looking for test storage... 
00:12:34.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:34.938 13:40:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:34.938 13:40:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1242089 00:12:34.938 13:40:49 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:34.938 13:40:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1242089 00:12:34.938 13:40:49 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 1242089 ']' 00:12:34.938 13:40:49 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.938 13:40:49 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:34.938 13:40:49 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.938 13:40:49 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:34.938 13:40:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 [2024-06-10 13:40:49.329780] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:12:34.938 [2024-06-10 13:40:49.329853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242089 ] 00:12:34.938 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.197 [2024-06-10 13:40:49.448899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.197 [2024-06-10 13:40:49.533682] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.765 13:40:50 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:35.765 13:40:50 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:12:36.024 13:40:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:12:36.024 { 00:12:36.024 "version": "SPDK v24.09-pre git sha1 c5b9f923d", 00:12:36.024 "fields": { 00:12:36.024 "major": 24, 00:12:36.024 "minor": 9, 00:12:36.024 "patch": 0, 00:12:36.024 "suffix": "-pre", 00:12:36.024 "commit": "c5b9f923d" 00:12:36.024 } 00:12:36.024 } 00:12:36.024 13:40:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:36.024 13:40:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:36.024 13:40:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:36.024 13:40:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:36.024 13:40:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:36.024 13:40:50 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:36.024 13:40:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:36.024 13:40:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:36.024 13:40:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:36.024 13:40:50 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:36.283 13:40:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:36.283 13:40:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:36.283 13:40:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:36.283 request: 00:12:36.283 { 00:12:36.283 "method": "env_dpdk_get_mem_stats", 00:12:36.283 "req_id": 1 00:12:36.283 } 00:12:36.283 Got JSON-RPC error response 00:12:36.283 response: 00:12:36.283 { 00:12:36.283 "code": -32601, 00:12:36.283 "message": "Method not found" 00:12:36.283 } 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:36.283 13:40:50 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:36.284 13:40:50 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:36.543 13:40:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1242089 00:12:36.543 13:40:50 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 1242089 ']' 00:12:36.543 13:40:50 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 1242089 00:12:36.543 13:40:50 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:12:36.543 13:40:50 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:36.543 13:40:50 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1242089 00:12:36.543 13:40:50 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:36.543 13:40:50 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:36.543 13:40:50 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1242089' 00:12:36.543 killing process with pid 1242089 00:12:36.543 13:40:50 app_cmdline -- common/autotest_common.sh@968 -- # kill 1242089 00:12:36.543 13:40:50 app_cmdline -- common/autotest_common.sh@973 -- # wait 1242089 00:12:36.803 00:12:36.803 real 0m1.979s 00:12:36.803 user 0m2.404s 00:12:36.803 sys 0m0.588s 00:12:36.803 13:40:51 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:36.803 13:40:51 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:36.804 ************************************ 00:12:36.804 END TEST app_cmdline 00:12:36.804 ************************************ 00:12:36.804 13:40:51 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:12:36.804 13:40:51 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:36.804 13:40:51 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:36.804 13:40:51 -- common/autotest_common.sh@10 -- # set +x 00:12:36.804 ************************************ 00:12:36.804 START TEST version 00:12:36.804 ************************************ 00:12:36.804 13:40:51 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:12:37.064 * Looking for test storage... 00:12:37.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:37.064 13:40:51 version -- app/version.sh@17 -- # get_header_version major 00:12:37.064 13:40:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:12:37.064 13:40:51 version -- app/version.sh@14 -- # cut -f2 00:12:37.064 13:40:51 version -- app/version.sh@14 -- # tr -d '"' 00:12:37.064 13:40:51 version -- app/version.sh@17 -- # major=24 00:12:37.064 13:40:51 version -- app/version.sh@18 -- # get_header_version minor 00:12:37.064 13:40:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:12:37.064 13:40:51 version -- app/version.sh@14 -- # cut -f2 00:12:37.064 13:40:51 version -- app/version.sh@14 -- # tr -d '"' 00:12:37.064 13:40:51 version -- app/version.sh@18 -- # minor=9 00:12:37.064 13:40:51 version -- app/version.sh@19 -- # get_header_version patch 00:12:37.064 13:40:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:12:37.064 13:40:51 version -- app/version.sh@14 -- # cut -f2 00:12:37.064 13:40:51 version -- app/version.sh@14 -- # tr -d '"' 00:12:37.064 13:40:51 version -- app/version.sh@19 -- # patch=0 00:12:37.064 13:40:51 version -- app/version.sh@20 -- # get_header_version suffix 00:12:37.064 13:40:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:12:37.064 13:40:51 version -- app/version.sh@14 -- # cut -f2 00:12:37.064 13:40:51 version -- app/version.sh@14 -- # tr -d '"' 00:12:37.064 13:40:51 version -- app/version.sh@20 -- # suffix=-pre 00:12:37.064 13:40:51 version -- app/version.sh@22 -- # version=24.9 00:12:37.064 13:40:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:37.064 13:40:51 version -- app/version.sh@28 -- # version=24.9rc0 00:12:37.064 13:40:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:37.064 13:40:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:37.064 13:40:51 version -- app/version.sh@30 -- # py_version=24.9rc0 
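Stripped of the xtrace prefixes, the version check traced above reduces to the short sequence below. This is a condensed sketch reconstructed only from the traced commands: the get_header_version helper is simplified to take the macro suffix directly, and the exact condition that turns the -pre suffix into an rc0 tag is inferred from the observed values rather than read from the script source.

    # Read one SPDK_VERSION_* macro out of include/spdk/version.h, e.g. MAJOR -> 24.
    repo=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$repo/include/spdk/version.h" |
            cut -f2 | tr -d '"'
    }

    major=$(get_header_version MAJOR)     # 24
    minor=$(get_header_version MINOR)     # 9
    patch=$(get_header_version PATCH)     # 0
    suffix=$(get_header_version SUFFIX)   # -pre

    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch      # skipped in this run, patch is 0
    [[ $suffix == -pre ]] && version=${version}rc0   # -> 24.9rc0 (inferred mapping)

    # The test passes when the in-tree python package reports the same string.
    py_version=$(PYTHONPATH=$repo/python python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]]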
00:12:37.064 13:40:51 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:12:37.064 00:12:37.064 real 0m0.186s 00:12:37.064 user 0m0.092s 00:12:37.064 sys 0m0.143s 00:12:37.064 13:40:51 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:37.064 13:40:51 version -- common/autotest_common.sh@10 -- # set +x 00:12:37.064 ************************************ 00:12:37.064 END TEST version 00:12:37.064 ************************************ 00:12:37.064 13:40:51 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:12:37.064 13:40:51 -- spdk/autotest.sh@198 -- # uname -s 00:12:37.064 13:40:51 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:12:37.064 13:40:51 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:12:37.064 13:40:51 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:12:37.064 13:40:51 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:12:37.064 13:40:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:12:37.064 13:40:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:12:37.064 13:40:51 -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:37.064 13:40:51 -- common/autotest_common.sh@10 -- # set +x 00:12:37.064 13:40:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:12:37.064 13:40:51 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:12:37.064 13:40:51 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:12:37.064 13:40:51 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:12:37.064 13:40:51 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:12:37.064 13:40:51 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:12:37.064 13:40:51 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:37.064 13:40:51 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:37.064 13:40:51 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:37.064 13:40:51 -- common/autotest_common.sh@10 -- # set +x 00:12:37.324 ************************************ 00:12:37.324 START TEST nvmf_tcp 00:12:37.324 ************************************ 00:12:37.324 13:40:51 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:37.324 * Looking for test storage... 00:12:37.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.324 13:40:51 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.324 13:40:51 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.324 13:40:51 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.324 13:40:51 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.324 13:40:51 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.324 13:40:51 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.324 13:40:51 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:12:37.324 13:40:51 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:12:37.324 13:40:51 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:37.324 13:40:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:12:37.324 13:40:51 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:37.324 13:40:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:37.324 13:40:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:37.324 13:40:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:37.324 ************************************ 00:12:37.324 START TEST nvmf_example 00:12:37.324 ************************************ 00:12:37.324 13:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:37.584 * Looking for test storage... 
00:12:37.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.584 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:12:37.585 13:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:45.721 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:45.721 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.721 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:45.722 Found net devices under 
0000:af:00.0: cvl_0_0 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:45.722 Found net devices under 0000:af:00.1: cvl_0_1 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.722 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:45.981 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:45.981 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.981 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.981 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.981 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.981 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:45.981 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:46.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:12:46.240 00:12:46.240 --- 10.0.0.2 ping statistics --- 00:12:46.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.240 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:12:46.240 00:12:46.240 --- 10.0.0.1 ping statistics --- 00:12:46.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.240 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1246652 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1246652 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 1246652 ']' 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
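For reference, the NVMe/TCP test topology that the nvmf/common.sh trace above builds is small enough to list in full. This is a condensed sketch containing only the commands visible in the trace; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.x addresses are specific to this run, and all of these commands require root.

    # First e810 port (cvl_0_0) becomes the target side inside a private netns;
    # the second port (cvl_0_1) stays in the default netns as the initiator side.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check
    modprobe nvme-tcp                                   # kernel initiator driver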
00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:46.240 13:41:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:46.240 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:47.178 13:41:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:47.178 EAL: No free 2048 kB hugepages reported on node 1 
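Before the perf output below, note that the target bring-up traced above is only five RPCs plus one perf invocation once the xtrace noise is stripped. A condensed sketch follows; scripts/rpc.py is used here as a stand-in for the rpc_cmd wrapper seen in the trace (rpc_cmd forwards to the target's /var/tmp/spdk.sock RPC socket), and the paths, NQN and parameters are exactly those shown above.

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py

    # Start the example target with core mask 0xF inside the target namespace.
    ip netns exec cvl_0_0_ns_spdk $spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF &

    # Configure it: TCP transport, one 64 MiB / 512 B-block RAM disk, one subsystem.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                      # -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Drive it from the initiator side: queue depth 64, 4 KiB random mixed I/O (-M 30), 10 s.
    $spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'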
00:12:59.390 Initializing NVMe Controllers 00:12:59.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:59.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:59.390 Initialization complete. Launching workers. 00:12:59.390 ======================================================== 00:12:59.390 Latency(us) 00:12:59.390 Device Information : IOPS MiB/s Average min max 00:12:59.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15301.14 59.77 4182.39 807.23 15436.18 00:12:59.390 ======================================================== 00:12:59.390 Total : 15301.14 59.77 4182.39 807.23 15436.18 00:12:59.390 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.390 rmmod nvme_tcp 00:12:59.390 rmmod nvme_fabrics 00:12:59.390 rmmod nvme_keyring 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1246652 ']' 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1246652 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 1246652 ']' 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 1246652 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1246652 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1246652' 00:12:59.390 killing process with pid 1246652 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 1246652 00:12:59.390 13:41:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 1246652 00:12:59.390 nvmf threads initialize successfully 00:12:59.390 bdev subsystem init successfully 00:12:59.390 created a nvmf target service 00:12:59.390 create targets's poll groups done 00:12:59.390 all subsystems of target started 00:12:59.390 nvmf target is running 00:12:59.390 all subsystems of target stopped 00:12:59.390 destroy targets's poll groups done 00:12:59.390 destroyed the nvmf target service 00:12:59.390 bdev subsystem finish successfully 00:12:59.390 nvmf threads destroy successfully 00:12:59.390 13:41:12 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:59.390 13:41:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:59.390 13:41:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:59.390 13:41:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.390 13:41:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:59.390 13:41:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.390 13:41:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.390 13:41:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.958 13:41:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:59.959 13:41:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:59.959 13:41:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:59.959 13:41:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:59.959 00:12:59.959 real 0m22.558s 00:12:59.959 user 0m46.305s 00:12:59.959 sys 0m8.752s 00:12:59.959 13:41:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:59.959 13:41:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:59.959 ************************************ 00:12:59.959 END TEST nvmf_example 00:12:59.959 ************************************ 00:12:59.959 13:41:14 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:59.959 13:41:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:59.959 13:41:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:59.959 13:41:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.959 ************************************ 00:12:59.959 START TEST nvmf_filesystem 00:12:59.959 ************************************ 00:12:59.959 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:00.221 * Looking for test storage... 
00:13:00.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:00.221 13:41:14 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:00.221 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:00.221 #define SPDK_CONFIG_H 00:13:00.222 #define SPDK_CONFIG_APPS 1 00:13:00.222 #define SPDK_CONFIG_ARCH native 00:13:00.222 #undef SPDK_CONFIG_ASAN 00:13:00.222 #undef SPDK_CONFIG_AVAHI 00:13:00.222 #undef SPDK_CONFIG_CET 00:13:00.222 #define SPDK_CONFIG_COVERAGE 1 00:13:00.222 #define SPDK_CONFIG_CROSS_PREFIX 00:13:00.222 #undef SPDK_CONFIG_CRYPTO 00:13:00.222 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:00.222 #undef SPDK_CONFIG_CUSTOMOCF 00:13:00.222 #undef SPDK_CONFIG_DAOS 00:13:00.222 #define SPDK_CONFIG_DAOS_DIR 00:13:00.222 #define SPDK_CONFIG_DEBUG 1 00:13:00.222 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:00.222 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:00.222 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:00.222 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:00.222 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:00.222 #undef SPDK_CONFIG_DPDK_UADK 00:13:00.222 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:00.222 #define SPDK_CONFIG_EXAMPLES 1 00:13:00.222 #undef SPDK_CONFIG_FC 00:13:00.222 #define SPDK_CONFIG_FC_PATH 00:13:00.222 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:00.222 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:00.222 #undef SPDK_CONFIG_FUSE 00:13:00.222 #undef SPDK_CONFIG_FUZZER 00:13:00.222 #define SPDK_CONFIG_FUZZER_LIB 00:13:00.222 #undef SPDK_CONFIG_GOLANG 00:13:00.222 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:00.222 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:00.222 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:00.222 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:00.222 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:00.222 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:00.222 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:00.222 #define SPDK_CONFIG_IDXD 1 00:13:00.222 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:00.222 #undef SPDK_CONFIG_IPSEC_MB 00:13:00.222 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:00.222 #define SPDK_CONFIG_ISAL 1 00:13:00.222 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:00.222 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:00.222 #define SPDK_CONFIG_LIBDIR 00:13:00.222 #undef SPDK_CONFIG_LTO 00:13:00.222 #define SPDK_CONFIG_MAX_LCORES 00:13:00.222 #define SPDK_CONFIG_NVME_CUSE 1 00:13:00.222 #undef SPDK_CONFIG_OCF 00:13:00.222 #define SPDK_CONFIG_OCF_PATH 00:13:00.222 #define 
SPDK_CONFIG_OPENSSL_PATH 00:13:00.222 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:00.222 #define SPDK_CONFIG_PGO_DIR 00:13:00.222 #undef SPDK_CONFIG_PGO_USE 00:13:00.222 #define SPDK_CONFIG_PREFIX /usr/local 00:13:00.222 #undef SPDK_CONFIG_RAID5F 00:13:00.222 #undef SPDK_CONFIG_RBD 00:13:00.222 #define SPDK_CONFIG_RDMA 1 00:13:00.222 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:00.222 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:00.222 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:00.222 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:00.222 #define SPDK_CONFIG_SHARED 1 00:13:00.222 #undef SPDK_CONFIG_SMA 00:13:00.222 #define SPDK_CONFIG_TESTS 1 00:13:00.222 #undef SPDK_CONFIG_TSAN 00:13:00.222 #define SPDK_CONFIG_UBLK 1 00:13:00.222 #define SPDK_CONFIG_UBSAN 1 00:13:00.222 #undef SPDK_CONFIG_UNIT_TESTS 00:13:00.222 #undef SPDK_CONFIG_URING 00:13:00.222 #define SPDK_CONFIG_URING_PATH 00:13:00.222 #undef SPDK_CONFIG_URING_ZNS 00:13:00.222 #undef SPDK_CONFIG_USDT 00:13:00.222 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:00.222 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:00.222 #define SPDK_CONFIG_VFIO_USER 1 00:13:00.222 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:00.222 #define SPDK_CONFIG_VHOST 1 00:13:00.222 #define SPDK_CONFIG_VIRTIO 1 00:13:00.222 #undef SPDK_CONFIG_VTUNE 00:13:00.222 #define SPDK_CONFIG_VTUNE_DIR 00:13:00.222 #define SPDK_CONFIG_WERROR 1 00:13:00.222 #define SPDK_CONFIG_WPDK_DIR 00:13:00.222 #undef SPDK_CONFIG_XNVME 00:13:00.222 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:00.222 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:00.223 13:41:14 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:00.223 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1249157 ]] 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1249157 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.zBrsHP 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.zBrsHP/tests/target /tmp/spdk.zBrsHP 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956952576 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327477248 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=50888073216 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742280704 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10854207488 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30866427904 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871138304 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12338741248 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348456960 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9715712 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30869372928 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871142400 00:13:00.224 13:41:14 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1769472 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6174220288 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174224384 00:13:00.224 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:13:00.225 * Looking for test storage... 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=50888073216 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=13068800000 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:13:00.225 13:41:14 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.225 
13:41:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.225 13:41:14 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.225 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.226 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.226 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.226 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.485 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.485 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.485 13:41:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.485 13:41:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:08.608 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.608 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:08.609 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:08.609 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.609 13:41:22 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:08.609 Found net devices under 0000:af:00.0: cvl_0_0 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:08.609 Found net devices under 0000:af:00.1: cvl_0_1 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:08.609 13:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.609 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:08.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:13:08.869 00:13:08.869 --- 10.0.0.2 ping statistics --- 00:13:08.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.869 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:08.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:13:08.869 00:13:08.869 --- 10.0.0.1 ping statistics --- 00:13:08.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.869 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:08.869 ************************************ 00:13:08.869 START TEST nvmf_filesystem_no_in_capsule 00:13:08.869 ************************************ 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1253044 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1253044 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 1253044 ']' 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:08.869 13:41:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.869 [2024-06-10 13:41:23.261644] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:13:08.869 [2024-06-10 13:41:23.261707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.869 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.128 [2024-06-10 13:41:23.390951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.129 [2024-06-10 13:41:23.481089] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.129 [2024-06-10 13:41:23.481136] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.129 [2024-06-10 13:41:23.481149] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.129 [2024-06-10 13:41:23.481161] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.129 [2024-06-10 13:41:23.481173] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
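For reference, the network plumbing that nvmf_tcp_init performed in the trace above condenses to the sequence below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken directly from the trace; on another host the ice-driven port names would differ.

# move the target-side port into its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# the initiator keeps cvl_0_1 in the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# firewall exception for NVMe/TCP (port 4420) on cvl_0_1, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the listener added later binds to 10.0.0.2 while the host-side nvme connect dials the same address over cvl_0_1.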
00:13:09.129 [2024-06-10 13:41:23.481227] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.129 [2024-06-10 13:41:23.481331] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.129 [2024-06-10 13:41:23.481443] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.129 [2024-06-10 13:41:23.481444] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.066 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.067 [2024-06-10 13:41:24.222885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.067 Malloc1 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.067 [2024-06-10 13:41:24.378253] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:13:10.067 { 00:13:10.067 "name": "Malloc1", 00:13:10.067 "aliases": [ 00:13:10.067 "1defaf16-c690-45bb-818d-7013931a3781" 00:13:10.067 ], 00:13:10.067 "product_name": "Malloc disk", 00:13:10.067 "block_size": 512, 00:13:10.067 "num_blocks": 1048576, 00:13:10.067 "uuid": "1defaf16-c690-45bb-818d-7013931a3781", 00:13:10.067 "assigned_rate_limits": { 00:13:10.067 "rw_ios_per_sec": 0, 00:13:10.067 "rw_mbytes_per_sec": 0, 00:13:10.067 "r_mbytes_per_sec": 0, 00:13:10.067 "w_mbytes_per_sec": 0 00:13:10.067 }, 00:13:10.067 "claimed": true, 00:13:10.067 "claim_type": "exclusive_write", 00:13:10.067 "zoned": false, 00:13:10.067 "supported_io_types": { 00:13:10.067 "read": true, 00:13:10.067 "write": true, 00:13:10.067 "unmap": true, 00:13:10.067 "write_zeroes": true, 00:13:10.067 "flush": true, 00:13:10.067 "reset": true, 00:13:10.067 "compare": false, 00:13:10.067 "compare_and_write": false, 00:13:10.067 "abort": true, 00:13:10.067 "nvme_admin": false, 00:13:10.067 "nvme_io": false 00:13:10.067 }, 00:13:10.067 "memory_domains": [ 00:13:10.067 { 00:13:10.067 "dma_device_id": "system", 00:13:10.067 "dma_device_type": 1 00:13:10.067 }, 00:13:10.067 { 00:13:10.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.067 "dma_device_type": 2 00:13:10.067 } 00:13:10.067 ], 00:13:10.067 "driver_specific": {} 00:13:10.067 } 00:13:10.067 ]' 00:13:10.067 
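The provisioning RPCs above are issued through the suite's rpc_cmd wrapper against the target's /var/tmp/spdk.sock. Done by hand with SPDK's scripts/rpc.py client they would look roughly like the following; the standalone rpc.py invocation is an assumption here, while the method names and arguments are exactly those shown in the trace.

# TCP transport, 8192-byte IO unit, in-capsule data disabled for this first case (-c 0)
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192 -c 0
# 512 MiB RAM-backed bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 512 512 -b Malloc1
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdev_get_bdevs JSON dumped above is parsed (with jq, next) only to recover the expected size: block_size 512 * num_blocks 1048576 = 536870912 bytes, i.e. 512 MiB, which is later compared against the size the initiator reports for nvme0n1.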
13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:10.067 13:41:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.446 13:41:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.446 13:41:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:13:11.446 13:41:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.446 13:41:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:11.446 13:41:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:13:13.351 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:13.351 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:13.351 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:13.715 13:41:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:13.715 13:41:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:13.715 13:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:14.664 13:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:15.600 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:15.600 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:15.600 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:13:15.600 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:15.600 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:15.600 ************************************ 00:13:15.600 START TEST filesystem_ext4 00:13:15.600 ************************************ 00:13:15.600 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:15.600 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:15.600 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:15.600 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:15.600 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:13:15.601 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:13:15.601 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:13:15.601 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:13:15.601 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:13:15.601 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:13:15.601 13:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:15.601 mke2fs 1.46.5 (30-Dec-2021) 00:13:15.601 Discarding device blocks: 0/522240 done 00:13:15.601 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:15.601 
Filesystem UUID: 5e14c7cf-f7cc-4e1d-8a6e-32959b4c01b1 00:13:15.601 Superblock backups stored on blocks: 00:13:15.601 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:15.601 00:13:15.601 Allocating group tables: 0/64 done 00:13:15.601 Writing inode tables: 0/64 done 00:13:15.859 Creating journal (8192 blocks): done 00:13:15.859 Writing superblocks and filesystem accounting information: 0/64 done 00:13:15.859 00:13:15.859 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:13:15.860 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:15.860 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1253044 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:16.120 00:13:16.120 real 0m0.509s 00:13:16.120 user 0m0.037s 00:13:16.120 sys 0m0.071s 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:16.120 ************************************ 00:13:16.120 END TEST filesystem_ext4 00:13:16.120 ************************************ 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.120 ************************************ 00:13:16.120 START TEST filesystem_btrfs 00:13:16.120 ************************************ 00:13:16.120 13:41:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:13:16.120 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:16.688 btrfs-progs v6.6.2 00:13:16.688 See https://btrfs.readthedocs.io for more information. 00:13:16.689 00:13:16.689 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:16.689 NOTE: several default settings have changed in version 5.15, please make sure 00:13:16.689 this does not affect your deployments: 00:13:16.689 - DUP for metadata (-m dup) 00:13:16.689 - enabled no-holes (-O no-holes) 00:13:16.689 - enabled free-space-tree (-R free-space-tree) 00:13:16.689 00:13:16.689 Label: (null) 00:13:16.689 UUID: 62307b3b-e8ac-4dd9-ae41-59f9996c9f00 00:13:16.689 Node size: 16384 00:13:16.689 Sector size: 4096 00:13:16.689 Filesystem size: 510.00MiB 00:13:16.689 Block group profiles: 00:13:16.689 Data: single 8.00MiB 00:13:16.689 Metadata: DUP 32.00MiB 00:13:16.689 System: DUP 8.00MiB 00:13:16.689 SSD detected: yes 00:13:16.689 Zoned device: no 00:13:16.689 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:16.689 Runtime features: free-space-tree 00:13:16.689 Checksum: crc32c 00:13:16.689 Number of devices: 1 00:13:16.689 Devices: 00:13:16.689 ID SIZE PATH 00:13:16.689 1 510.00MiB /dev/nvme0n1p1 00:13:16.689 00:13:16.689 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:13:16.689 13:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1253044 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:16.948 00:13:16.948 real 0m0.844s 00:13:16.948 user 0m0.027s 00:13:16.948 sys 0m0.147s 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:16.948 ************************************ 00:13:16.948 END TEST filesystem_btrfs 00:13:16.948 ************************************ 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:16.948 13:41:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:16.948 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.208 ************************************ 00:13:17.208 START TEST filesystem_xfs 00:13:17.208 ************************************ 00:13:17.208 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:13:17.208 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:17.208 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:17.208 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:17.208 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:13:17.208 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:13:17.208 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:13:17.208 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:13:17.208 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:13:17.208 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:13:17.208 13:41:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:17.208 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:17.208 = sectsz=512 attr=2, projid32bit=1 00:13:17.208 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:17.208 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:17.208 data = bsize=4096 blocks=130560, imaxpct=25 00:13:17.208 = sunit=0 swidth=0 blks 00:13:17.208 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:17.208 log =internal log bsize=4096 blocks=16384, version=2 00:13:17.208 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:17.208 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:18.146 Discarding blocks...Done. 
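The xfs case that follows runs the same smoke test already applied to ext4 and btrfs above: mount the fresh filesystem, write and delete a file, unmount, then confirm the block device and the partition are still visible and the target (pid 1253044) is still alive. Condensed from the trace:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 1253044                          # nvmf_tgt must still be running
lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still exported
lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still present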
00:13:18.146 13:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:13:18.146 13:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:20.682 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:20.682 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:20.682 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:20.941 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:20.941 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:20.941 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:20.941 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1253044 00:13:20.941 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:20.941 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:20.941 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:20.941 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:20.941 00:13:20.941 real 0m3.766s 00:13:20.941 user 0m0.026s 00:13:20.941 sys 0m0.088s 00:13:20.941 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:20.941 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:20.941 ************************************ 00:13:20.941 END TEST filesystem_xfs 00:13:20.941 ************************************ 00:13:20.941 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:21.200 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:21.200 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.459 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.459 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:13:21.459 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:21.459 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.459 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:21.459 
13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.459 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:13:21.459 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.459 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:21.459 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:21.459 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:21.460 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:21.460 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1253044 00:13:21.460 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 1253044 ']' 00:13:21.460 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 1253044 00:13:21.460 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:13:21.460 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:21.460 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1253044 00:13:21.460 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:21.460 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:21.460 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1253044' 00:13:21.460 killing process with pid 1253044 00:13:21.460 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 1253044 00:13:21.460 13:41:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 1253044 00:13:21.720 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:21.720 00:13:21.720 real 0m12.978s 00:13:21.720 user 0m50.393s 00:13:21.720 sys 0m1.858s 00:13:21.720 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:21.720 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:21.720 ************************************ 00:13:21.720 END TEST nvmf_filesystem_no_in_capsule 00:13:21.720 ************************************ 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:21.979 
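Teardown of the first case, recorded just above, mirrors the setup: the test partition is removed, the initiator disconnects, the subsystem is deleted over RPC, and the nvmf_tgt process (pid 1253044) is killed after a check that its command name is reactor_0. Roughly, with the standalone rpc.py invocation again an assumption:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 1253044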
************************************ 00:13:21.979 START TEST nvmf_filesystem_in_capsule 00:13:21.979 ************************************ 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1255494 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1255494 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 1255494 ']' 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:21.979 13:41:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:21.979 [2024-06-10 13:41:36.331456] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:13:21.979 [2024-06-10 13:41:36.331514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.979 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.239 [2024-06-10 13:41:36.461067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.239 [2024-06-10 13:41:36.546546] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.239 [2024-06-10 13:41:36.546600] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.239 [2024-06-10 13:41:36.546614] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.239 [2024-06-10 13:41:36.546626] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.239 [2024-06-10 13:41:36.546636] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
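The nvmf_filesystem_in_capsule case starting here repeats the entire flow; the only functional difference is in_capsule=4096, so the transport is created with a 4096-byte in-capsule data size instead of 0, and the per-filesystem tests gain the in_capsule prefix (e.g. filesystem_in_capsule_ext4). In terms of the RPC seen further down:

# first case (no in-capsule data)
nvmf_create_transport -t tcp -o -u 8192 -c 0
# this case (up to 4096 bytes of data carried inside the command capsule)
nvmf_create_transport -t tcp -o -u 8192 -c 4096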
00:13:22.239 [2024-06-10 13:41:36.546695] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.239 [2024-06-10 13:41:36.546793] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.239 [2024-06-10 13:41:36.546923] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.239 [2024-06-10 13:41:36.546923] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.807 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:22.807 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:13:22.807 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.807 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:22.807 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.067 [2024-06-10 13:41:37.293114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.067 Malloc1 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.067 13:41:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.067 [2024-06-10 13:41:37.442974] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:23.067 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:13:23.067 { 00:13:23.067 "name": "Malloc1", 00:13:23.067 "aliases": [ 00:13:23.067 "293b9e96-0936-42b5-91c4-992229bba683" 00:13:23.067 ], 00:13:23.067 "product_name": "Malloc disk", 00:13:23.067 "block_size": 512, 00:13:23.067 "num_blocks": 1048576, 00:13:23.067 "uuid": "293b9e96-0936-42b5-91c4-992229bba683", 00:13:23.067 "assigned_rate_limits": { 00:13:23.067 "rw_ios_per_sec": 0, 00:13:23.067 "rw_mbytes_per_sec": 0, 00:13:23.067 "r_mbytes_per_sec": 0, 00:13:23.067 "w_mbytes_per_sec": 0 00:13:23.067 }, 00:13:23.067 "claimed": true, 00:13:23.068 "claim_type": "exclusive_write", 00:13:23.068 "zoned": false, 00:13:23.068 "supported_io_types": { 00:13:23.068 "read": true, 00:13:23.068 "write": true, 00:13:23.068 "unmap": true, 00:13:23.068 "write_zeroes": true, 00:13:23.068 "flush": true, 00:13:23.068 "reset": true, 00:13:23.068 "compare": false, 00:13:23.068 "compare_and_write": false, 00:13:23.068 "abort": true, 00:13:23.068 "nvme_admin": false, 00:13:23.068 "nvme_io": false 00:13:23.068 }, 00:13:23.068 "memory_domains": [ 00:13:23.068 { 00:13:23.068 "dma_device_id": "system", 00:13:23.068 "dma_device_type": 1 00:13:23.068 }, 00:13:23.068 { 00:13:23.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.068 "dma_device_type": 2 00:13:23.068 } 00:13:23.068 ], 00:13:23.068 "driver_specific": {} 00:13:23.068 } 00:13:23.068 ]' 00:13:23.068 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] 
.block_size' 00:13:23.068 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:13:23.068 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:13:23.327 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:13:23.327 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:13:23.327 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:13:23.327 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:23.327 13:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.704 13:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.705 13:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:13:24.705 13:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.705 13:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:24.705 13:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:26.608 13:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:26.866 13:41:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:27.434 13:41:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:28.370 ************************************ 00:13:28.370 START TEST filesystem_in_capsule_ext4 00:13:28.370 ************************************ 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:13:28.370 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:13:28.371 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:28.371 mke2fs 1.46.5 (30-Dec-2021) 00:13:28.371 Discarding device blocks: 0/522240 done 00:13:28.371 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:28.371 Filesystem UUID: 897e20c8-0de1-40c9-b14c-e55a4223a4ac 00:13:28.371 Superblock backups stored on blocks: 00:13:28.371 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:28.371 00:13:28.371 Allocating group tables: 0/64 done 00:13:28.371 Writing inode tables: 0/64 done 00:13:28.629 Creating journal (8192 blocks): done 00:13:28.629 Writing superblocks and filesystem accounting information: 0/64 done 00:13:28.629 00:13:28.629 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:13:28.629 13:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1255494 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:28.889 00:13:28.889 real 0m0.546s 00:13:28.889 user 0m0.033s 00:13:28.889 sys 0m0.070s 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:28.889 ************************************ 00:13:28.889 END TEST filesystem_in_capsule_ext4 00:13:28.889 ************************************ 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:28.889 ************************************ 00:13:28.889 START TEST filesystem_in_capsule_btrfs 00:13:28.889 ************************************ 00:13:28.889 13:41:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:13:28.889 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:29.148 btrfs-progs v6.6.2 00:13:29.148 See https://btrfs.readthedocs.io for more information. 00:13:29.148 00:13:29.148 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:29.148 NOTE: several default settings have changed in version 5.15, please make sure 00:13:29.148 this does not affect your deployments: 00:13:29.148 - DUP for metadata (-m dup) 00:13:29.148 - enabled no-holes (-O no-holes) 00:13:29.148 - enabled free-space-tree (-R free-space-tree) 00:13:29.148 00:13:29.148 Label: (null) 00:13:29.148 UUID: 059d7e83-46c2-4f0c-b958-4d1cdf6e9c11 00:13:29.148 Node size: 16384 00:13:29.148 Sector size: 4096 00:13:29.148 Filesystem size: 510.00MiB 00:13:29.148 Block group profiles: 00:13:29.148 Data: single 8.00MiB 00:13:29.148 Metadata: DUP 32.00MiB 00:13:29.148 System: DUP 8.00MiB 00:13:29.148 SSD detected: yes 00:13:29.148 Zoned device: no 00:13:29.148 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:29.148 Runtime features: free-space-tree 00:13:29.148 Checksum: crc32c 00:13:29.148 Number of devices: 1 00:13:29.148 Devices: 00:13:29.148 ID SIZE PATH 00:13:29.148 1 510.00MiB /dev/nvme0n1p1 00:13:29.148 00:13:29.148 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:13:29.148 13:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1255494 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:30.086 00:13:30.086 real 0m1.073s 00:13:30.086 user 0m0.020s 00:13:30.086 sys 0m0.149s 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:30.086 ************************************ 00:13:30.086 END TEST filesystem_in_capsule_btrfs 00:13:30.086 ************************************ 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.086 ************************************ 00:13:30.086 START TEST filesystem_in_capsule_xfs 00:13:30.086 ************************************ 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:13:30.086 13:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:30.345 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:30.345 = sectsz=512 attr=2, projid32bit=1 00:13:30.345 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:30.345 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:30.345 data = bsize=4096 blocks=130560, imaxpct=25 00:13:30.345 = sunit=0 swidth=0 blks 00:13:30.345 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:30.345 log =internal log bsize=4096 blocks=16384, version=2 00:13:30.346 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:30.346 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:31.282 Discarding blocks...Done. 
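The mkfs output above (ext4, btrfs, and now xfs) all comes from the same per-filesystem check in target/filesystem.sh that this trace keeps re-entering. A condensed sketch of that check, reconstructed only from the commands visible in this trace, is below; the real script adds retry loops, a kill -0 check on the target PID and lsblk verification, and the helper name here is invented for illustration:

    # simplified reconstruction of the nvmf_filesystem_create flow seen in the trace above
    fs_smoke_test() {
        local fstype=$1 dev=/dev/nvme0n1p1 mnt=/mnt/device
        case "$fstype" in
            ext4) mkfs.ext4 -F "$dev" ;;          # ext4 is forced with -F, as in the ext4 test above
            *)    "mkfs.$fstype" -f "$dev" ;;     # btrfs and xfs are forced with -f
        esac
        mount "$dev" "$mnt"                       # mount the freshly created filesystem
        touch "$mnt/aaa" && sync                  # create a file on it and flush
        rm "$mnt/aaa" && sync                     # remove it and flush again
        umount "$mnt"                             # unmount before the next fstype is tried
    }
    # usage matching the trace: fs_smoke_test ext4; fs_smoke_test btrfs; fs_smoke_test xfs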
00:13:31.282 13:41:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:13:31.282 13:41:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:33.817 13:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:33.817 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:33.817 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:33.817 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:33.817 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:33.817 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:33.817 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1255494 00:13:33.817 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:33.817 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:33.817 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:33.818 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:33.818 00:13:33.818 real 0m3.586s 00:13:33.818 user 0m0.030s 00:13:33.818 sys 0m0.082s 00:13:33.818 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:33.818 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:33.818 ************************************ 00:13:33.818 END TEST filesystem_in_capsule_xfs 00:13:33.818 ************************************ 00:13:33.818 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:33.818 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:33.818 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.077 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.077 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:13:34.077 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:34.077 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.077 13:41:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:34.077 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.077 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:13:34.077 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.077 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1255494 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 1255494 ']' 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 1255494 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1255494 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1255494' 00:13:34.078 killing process with pid 1255494 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 1255494 00:13:34.078 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 1255494 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:34.647 00:13:34.647 real 0m12.549s 00:13:34.647 user 0m48.603s 00:13:34.647 sys 0m1.932s 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.647 ************************************ 00:13:34.647 END TEST nvmf_filesystem_in_capsule 00:13:34.647 ************************************ 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:34.647 rmmod nvme_tcp 00:13:34.647 rmmod nvme_fabrics 00:13:34.647 rmmod nvme_keyring 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.647 13:41:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.554 13:41:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:36.554 00:13:36.554 real 0m36.647s 00:13:36.554 user 1m41.287s 00:13:36.554 sys 0m10.525s 00:13:36.554 13:41:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:36.554 13:41:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:36.554 ************************************ 00:13:36.554 END TEST nvmf_filesystem 00:13:36.554 ************************************ 00:13:36.813 13:41:51 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:36.813 13:41:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:36.813 13:41:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:36.813 13:41:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.813 ************************************ 00:13:36.813 START TEST nvmf_target_discovery 00:13:36.813 ************************************ 00:13:36.813 13:41:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:36.813 * Looking for test storage... 
00:13:36.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.813 13:41:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.813 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:36.813 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.813 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.813 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.813 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.813 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.814 13:41:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.832 13:41:59 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:46.832 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:46.832 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.832 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:46.833 Found net devices under 0000:af:00.0: cvl_0_0 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:46.833 Found net devices under 0000:af:00.1: cvl_0_1 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:46.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:13:46.833 00:13:46.833 --- 10.0.0.2 ping statistics --- 00:13:46.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.833 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:13:46.833 00:13:46.833 --- 10.0.0.1 ping statistics --- 00:13:46.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.833 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1262258 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1262258 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 1262258 ']' 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:13:46.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:46.833 13:41:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.833 [2024-06-10 13:41:59.918280] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:13:46.833 [2024-06-10 13:41:59.918340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.833 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.833 [2024-06-10 13:42:00.048636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.833 [2024-06-10 13:42:00.140496] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.833 [2024-06-10 13:42:00.140547] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.833 [2024-06-10 13:42:00.140561] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.833 [2024-06-10 13:42:00.140574] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.833 [2024-06-10 13:42:00.140591] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.833 [2024-06-10 13:42:00.140653] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.833 [2024-06-10 13:42:00.140768] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.833 [2024-06-10 13:42:00.140884] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.833 [2024-06-10 13:42:00.140884] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.833 [2024-06-10 13:42:00.881133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:46.833 13:42:00 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.833 Null1 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.833 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 [2024-06-10 13:42:00.929487] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 Null2 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:46.834 13:42:00 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 Null3 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 Null4 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:01 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:46.834 00:13:46.834 Discovery Log Number of Records 6, Generation counter 6 00:13:46.834 =====Discovery Log Entry 0====== 00:13:46.834 trtype: tcp 00:13:46.834 adrfam: ipv4 00:13:46.834 subtype: current discovery subsystem 00:13:46.834 treq: not required 00:13:46.834 portid: 0 00:13:46.834 trsvcid: 4420 00:13:46.834 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:46.834 traddr: 10.0.0.2 00:13:46.834 eflags: explicit discovery connections, duplicate discovery information 00:13:46.834 sectype: none 00:13:46.834 =====Discovery Log Entry 1====== 00:13:46.834 trtype: tcp 00:13:46.834 adrfam: ipv4 00:13:46.834 subtype: nvme subsystem 00:13:46.834 treq: not required 00:13:46.834 portid: 0 00:13:46.834 trsvcid: 4420 00:13:46.834 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:46.834 traddr: 10.0.0.2 00:13:46.834 eflags: none 00:13:46.834 sectype: none 00:13:46.834 =====Discovery Log Entry 2====== 00:13:46.834 trtype: tcp 00:13:46.834 adrfam: ipv4 00:13:46.834 subtype: nvme subsystem 00:13:46.834 treq: not required 00:13:46.834 portid: 0 00:13:46.834 trsvcid: 4420 00:13:46.834 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:46.834 traddr: 10.0.0.2 00:13:46.834 eflags: none 00:13:46.834 sectype: none 00:13:46.834 =====Discovery Log Entry 3====== 00:13:46.834 trtype: tcp 00:13:46.834 adrfam: ipv4 00:13:46.834 subtype: nvme subsystem 00:13:46.834 treq: not required 00:13:46.834 portid: 0 00:13:46.834 trsvcid: 4420 00:13:46.834 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:46.834 traddr: 10.0.0.2 00:13:46.834 eflags: none 00:13:46.834 sectype: none 00:13:46.834 =====Discovery Log Entry 4====== 00:13:46.834 trtype: tcp 00:13:46.834 adrfam: ipv4 00:13:46.834 subtype: nvme subsystem 00:13:46.834 treq: not required 
00:13:46.834 portid: 0 00:13:46.834 trsvcid: 4420 00:13:46.834 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:46.834 traddr: 10.0.0.2 00:13:46.834 eflags: none 00:13:46.834 sectype: none 00:13:46.834 =====Discovery Log Entry 5====== 00:13:46.834 trtype: tcp 00:13:46.834 adrfam: ipv4 00:13:46.834 subtype: discovery subsystem referral 00:13:46.834 treq: not required 00:13:46.834 portid: 0 00:13:46.834 trsvcid: 4430 00:13:46.834 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:46.834 traddr: 10.0.0.2 00:13:46.834 eflags: none 00:13:46.834 sectype: none 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:46.834 Perform nvmf subsystem discovery via RPC 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.834 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.834 [ 00:13:46.834 { 00:13:46.834 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:46.834 "subtype": "Discovery", 00:13:46.834 "listen_addresses": [ 00:13:46.834 { 00:13:46.834 "trtype": "TCP", 00:13:46.834 "adrfam": "IPv4", 00:13:46.834 "traddr": "10.0.0.2", 00:13:46.834 "trsvcid": "4420" 00:13:46.834 } 00:13:46.834 ], 00:13:46.834 "allow_any_host": true, 00:13:46.834 "hosts": [] 00:13:46.834 }, 00:13:46.834 { 00:13:46.834 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.834 "subtype": "NVMe", 00:13:46.834 "listen_addresses": [ 00:13:46.834 { 00:13:46.834 "trtype": "TCP", 00:13:46.834 "adrfam": "IPv4", 00:13:46.834 "traddr": "10.0.0.2", 00:13:46.834 "trsvcid": "4420" 00:13:46.834 } 00:13:46.834 ], 00:13:46.834 "allow_any_host": true, 00:13:46.834 "hosts": [], 00:13:46.834 "serial_number": "SPDK00000000000001", 00:13:46.834 "model_number": "SPDK bdev Controller", 00:13:46.834 "max_namespaces": 32, 00:13:46.834 "min_cntlid": 1, 00:13:46.834 "max_cntlid": 65519, 00:13:46.834 "namespaces": [ 00:13:46.834 { 00:13:46.834 "nsid": 1, 00:13:46.834 "bdev_name": "Null1", 00:13:46.834 "name": "Null1", 00:13:46.834 "nguid": "484E94E196674CCEB4F5D2FAAC71E016", 00:13:46.835 "uuid": "484e94e1-9667-4cce-b4f5-d2faac71e016" 00:13:46.835 } 00:13:46.835 ] 00:13:46.835 }, 00:13:46.835 { 00:13:46.835 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:46.835 "subtype": "NVMe", 00:13:46.835 "listen_addresses": [ 00:13:46.835 { 00:13:46.835 "trtype": "TCP", 00:13:46.835 "adrfam": "IPv4", 00:13:46.835 "traddr": "10.0.0.2", 00:13:46.835 "trsvcid": "4420" 00:13:46.835 } 00:13:46.835 ], 00:13:46.835 "allow_any_host": true, 00:13:46.835 "hosts": [], 00:13:46.835 "serial_number": "SPDK00000000000002", 00:13:46.835 "model_number": "SPDK bdev Controller", 00:13:46.835 "max_namespaces": 32, 00:13:46.835 "min_cntlid": 1, 00:13:46.835 "max_cntlid": 65519, 00:13:46.835 "namespaces": [ 00:13:46.835 { 00:13:46.835 "nsid": 1, 00:13:46.835 "bdev_name": "Null2", 00:13:46.835 "name": "Null2", 00:13:46.835 "nguid": "D1FAE6D89DF54658B837FA0C8FBD4D3E", 00:13:46.835 "uuid": "d1fae6d8-9df5-4658-b837-fa0c8fbd4d3e" 00:13:46.835 } 00:13:46.835 ] 00:13:46.835 }, 00:13:46.835 { 00:13:46.835 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:46.835 "subtype": "NVMe", 00:13:46.835 "listen_addresses": [ 00:13:46.835 { 00:13:46.835 "trtype": "TCP", 00:13:46.835 "adrfam": "IPv4", 00:13:46.835 "traddr": "10.0.0.2", 00:13:46.835 "trsvcid": "4420" 00:13:46.835 } 00:13:46.835 ], 00:13:46.835 "allow_any_host": true, 
00:13:46.835 "hosts": [], 00:13:46.835 "serial_number": "SPDK00000000000003", 00:13:46.835 "model_number": "SPDK bdev Controller", 00:13:46.835 "max_namespaces": 32, 00:13:46.835 "min_cntlid": 1, 00:13:46.835 "max_cntlid": 65519, 00:13:46.835 "namespaces": [ 00:13:46.835 { 00:13:46.835 "nsid": 1, 00:13:46.835 "bdev_name": "Null3", 00:13:46.835 "name": "Null3", 00:13:46.835 "nguid": "BCF089264CD94DE4A8F5397BB9B572F8", 00:13:46.835 "uuid": "bcf08926-4cd9-4de4-a8f5-397bb9b572f8" 00:13:46.835 } 00:13:46.835 ] 00:13:46.835 }, 00:13:46.835 { 00:13:46.835 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:46.835 "subtype": "NVMe", 00:13:46.835 "listen_addresses": [ 00:13:46.835 { 00:13:46.835 "trtype": "TCP", 00:13:46.835 "adrfam": "IPv4", 00:13:46.835 "traddr": "10.0.0.2", 00:13:46.835 "trsvcid": "4420" 00:13:46.835 } 00:13:46.835 ], 00:13:46.835 "allow_any_host": true, 00:13:46.835 "hosts": [], 00:13:46.835 "serial_number": "SPDK00000000000004", 00:13:46.835 "model_number": "SPDK bdev Controller", 00:13:46.835 "max_namespaces": 32, 00:13:46.835 "min_cntlid": 1, 00:13:46.835 "max_cntlid": 65519, 00:13:46.835 "namespaces": [ 00:13:46.835 { 00:13:46.835 "nsid": 1, 00:13:46.835 "bdev_name": "Null4", 00:13:46.835 "name": "Null4", 00:13:46.835 "nguid": "8E603D9D08AA41BCBDD96C0B45239FCE", 00:13:46.835 "uuid": "8e603d9d-08aa-41bc-bdd9-6c0b45239fce" 00:13:46.835 } 00:13:46.835 ] 00:13:46.835 } 00:13:46.835 ] 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.835 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.094 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:47.094 rmmod nvme_tcp 00:13:47.094 rmmod nvme_fabrics 00:13:47.095 rmmod nvme_keyring 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1262258 ']' 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1262258 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 1262258 ']' 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 1262258 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1262258 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1262258' 00:13:47.095 killing process with pid 1262258 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 1262258 00:13:47.095 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 1262258 00:13:47.354 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:47.354 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:47.354 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:47.354 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.354 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:47.354 13:42:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.354 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.354 13:42:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.897 13:42:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:49.897 00:13:49.897 real 0m12.693s 00:13:49.897 user 0m8.718s 00:13:49.897 sys 0m7.191s 00:13:49.897 13:42:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:49.897 13:42:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.897 ************************************ 00:13:49.897 END TEST nvmf_target_discovery 00:13:49.897 ************************************ 00:13:49.897 13:42:03 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:49.897 13:42:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:49.897 13:42:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:49.897 13:42:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.897 ************************************ 00:13:49.897 START TEST nvmf_referrals 00:13:49.897 ************************************ 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:49.897 * Looking for test storage... 00:13:49.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.897 13:42:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
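
Before the remaining referral-specific variables load below, note how the host identity established by common.sh above is reused: every nvme discover call later in this test passes the generated host NQN together with a host ID taken from its UUID portion. A minimal sketch, assuming nvme-cli's gen-hostnqn and assuming (not confirmed by the trace) that the host ID is derived by stripping everything up to the last colon of the NQN:

  # Sketch of the host identity plumbing; values here are illustrative, not the ones generated in this run.
  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}             # assumption: host ID is the UUID suffix of the NQN
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json
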
00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.897 13:42:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.032 13:42:12 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.032 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:58.033 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:58.033 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.033 13:42:12 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:58.033 Found net devices under 0000:af:00.0: cvl_0_0 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:58.033 Found net devices under 0000:af:00.1: cvl_0_1 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:58.033 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.292 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.292 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.292 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.292 13:42:12 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:58.292 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.551 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.551 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.551 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:58.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:13:58.551 00:13:58.551 --- 10.0.0.2 ping statistics --- 00:13:58.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.551 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:13:58.552 00:13:58.552 --- 10.0.0.1 ping statistics --- 00:13:58.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.552 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1267700 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1267700 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 1267700 ']' 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:58.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:58.552 13:42:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.552 [2024-06-10 13:42:12.914166] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:13:58.552 [2024-06-10 13:42:12.914225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.552 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.811 [2024-06-10 13:42:13.040283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.811 [2024-06-10 13:42:13.125761] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.811 [2024-06-10 13:42:13.125809] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.811 [2024-06-10 13:42:13.125822] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.811 [2024-06-10 13:42:13.125834] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.811 [2024-06-10 13:42:13.125849] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.811 [2024-06-10 13:42:13.125901] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.811 [2024-06-10 13:42:13.125995] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.811 [2024-06-10 13:42:13.126104] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.811 [2024-06-10 13:42:13.126104] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.381 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:59.381 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:13:59.381 13:42:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:59.381 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:59.381 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.640 [2024-06-10 13:42:13.878123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.640 [2024-06-10 13:42:13.894382] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:59.640 13:42:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.640 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:59.640 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:59.640 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:59.640 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:59.640 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:59.640 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:13:59.640 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:59.640 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:59.910 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:00.170 13:42:14 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:00.170 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:00.430 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:00.430 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:00.430 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:00.430 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:00.430 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:00.430 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:00.430 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:00.430 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:00.430 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:00.430 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:00.690 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:00.691 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:00.691 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:00.691 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:00.691 13:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:00.691 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:00.691 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:00.691 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:14:00.691 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:00.691 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:00.691 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:00.691 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:00.951 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:01.211 rmmod nvme_tcp 00:14:01.211 rmmod nvme_fabrics 00:14:01.211 rmmod nvme_keyring 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1267700 ']' 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1267700 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 1267700 ']' 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 1267700 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1267700 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1267700' 00:14:01.211 killing process with pid 1267700 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 1267700 00:14:01.211 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 1267700 00:14:01.470 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:01.471 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:01.471 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:01.471 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.471 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:01.471 13:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.471 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.471 13:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.012 13:42:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:04.012 00:14:04.012 real 0m13.970s 00:14:04.012 user 0m13.850s 00:14:04.012 sys 0m7.630s 00:14:04.012 13:42:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:14:04.012 13:42:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.012 ************************************ 00:14:04.012 END TEST nvmf_referrals 00:14:04.012 ************************************ 00:14:04.012 13:42:17 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:04.012 13:42:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:04.012 13:42:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:04.012 13:42:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:04.012 ************************************ 00:14:04.012 START TEST nvmf_connect_disconnect 00:14:04.012 ************************************ 00:14:04.012 13:42:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:04.012 * Looking for test storage... 00:14:04.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.012 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.012 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:04.012 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.012 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.012 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.012 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.012 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.012 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.012 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.013 13:42:18 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:14:04.013 13:42:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:12.146 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:12.146 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:12.146 Found net devices under 0000:af:00.0: cvl_0_0 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:12.146 Found net devices under 0000:af:00.1: cvl_0_1 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.146 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:12.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:14:12.147 00:14:12.147 --- 10.0.0.2 ping statistics --- 00:14:12.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.147 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:12.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:14:12.147 00:14:12.147 --- 10.0.0.1 ping statistics --- 00:14:12.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.147 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1272578 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1272578 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 1272578 ']' 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:12.147 13:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:12.407 [2024-06-10 13:42:26.617755] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
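For readers following the trace: the nvmf_tcp_init block above is the two-port loopback topology that every phy test in this log reuses. One port of the E810 NIC (cvl_0_0 on this runner) is moved into a private network namespace and becomes the target side (10.0.0.2), while its peer port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1); nvmf_tgt is then launched inside that namespace. The sketch below condenses the commands visible in the trace; the interface names and the ./spdk path are specific to this runner and are shown only for illustration.

# Target-side port goes into its own namespace; the initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to the default port and sanity-check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target application itself runs inside the namespace (path shortened for readability).
ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &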
00:14:12.407 [2024-06-10 13:42:26.617814] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.407 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.407 [2024-06-10 13:42:26.746069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.407 [2024-06-10 13:42:26.833343] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.407 [2024-06-10 13:42:26.833391] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.407 [2024-06-10 13:42:26.833405] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.407 [2024-06-10 13:42:26.833417] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.407 [2024-06-10 13:42:26.833427] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.407 [2024-06-10 13:42:26.833524] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.407 [2024-06-10 13:42:26.833635] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.407 [2024-06-10 13:42:26.833680] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.407 [2024-06-10 13:42:26.833681] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.347 [2024-06-10 13:42:27.583732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:13.347 13:42:27 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.347 [2024-06-10 13:42:27.639703] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:13.347 13:42:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:16.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.758 rmmod nvme_tcp 00:14:30.758 rmmod nvme_fabrics 00:14:30.758 rmmod nvme_keyring 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1272578 ']' 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1272578 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@949 -- # '[' -z 1272578 ']' 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 1272578 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:30.758 13:42:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1272578 00:14:30.758 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:30.758 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:30.758 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1272578' 00:14:30.758 killing process with pid 1272578 00:14:30.758 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 1272578 00:14:30.758 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 1272578 00:14:31.018 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.018 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.018 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.018 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.018 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.018 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.018 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.018 13:42:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.925 13:42:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:32.925 00:14:32.925 real 0m29.398s 00:14:32.925 user 1m14.756s 00:14:32.925 sys 0m8.719s 00:14:32.925 13:42:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:32.925 13:42:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:32.925 ************************************ 00:14:32.925 END TEST nvmf_connect_disconnect 00:14:32.925 ************************************ 00:14:32.925 13:42:47 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:32.925 13:42:47 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:32.925 13:42:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:32.925 13:42:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:33.184 ************************************ 00:14:33.184 START TEST nvmf_multitarget 00:14:33.185 ************************************ 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:33.185 * Looking for test storage... 
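The nvmf_connect_disconnect run that ends just above follows a simple pattern: configure one TCP subsystem over JSON-RPC, then repeatedly attach and detach the host-side controller (num_iterations=5 in this run, hence the five "disconnected 1 controller(s)" lines). A rough equivalent of what the script drives, assembled only from the RPC calls visible in the trace, is sketched below; the ./spdk paths are shortened, rpc_cmd is assumed to be a thin wrapper around scripts/rpc.py, and the loop body is a simplification of the real connect_disconnect.sh (which also passes the generated --hostnqn/--hostid pair).

# One Malloc-backed subsystem listening on NVMe/TCP (values taken from the trace above).
./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
./spdk/scripts/rpc.py bdev_malloc_create 64 512          # returns Malloc0
./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Connect/disconnect loop (simplified): each pass should log one controller disconnected.
for i in $(seq 1 5); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done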
00:14:33.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
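Between tests the harness tears everything back down so the next test's nvmftestinit starts from a clean slate; the remove_spdk_ns call traced right here is that reset. A condensed view of the cleanup, built from the commands that do appear elsewhere in this log plus one assumption about how the namespace is removed, looks roughly like:

# Teardown pattern repeated after every test in this log (nvmftestfini):
sync
modprobe -v -r nvme-tcp            # also unloads nvme_fabrics/nvme_keyring as dependents
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                    # nvmf_tgt PID recorded at start-up; variable name is illustrative
ip -4 addr flush cvl_0_1           # drop the initiator-side address
# remove_spdk_ns is expected to drop the target namespace as well, e.g. something like
# ip netns delete cvl_0_0_ns_spdk  (assumption; the exact command is not shown verbatim in this trace)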
00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.185 13:42:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:43.171 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:43.171 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:43.171 Found net devices under 0000:af:00.0: cvl_0_0 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
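The gather_supported_nvmf_pci_devs trace above (and its earlier copy in the connect/disconnect test) is how the harness decides which physical ports to use: it matches known Intel E810/X722 and Mellanox device IDs, then resolves each matching PCI function to its kernel net device through sysfs. A stripped-down version of that lookup for the two E810 ports reported on this runner might look like the following; the PCI addresses are the ones printed in this log.

# Resolve a PCI function to its net device name (the pattern used by nvmf/common.sh).
for pci in 0000:af:00.0 0000:af:00.1; do
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net device under $pci: $(basename "$netdir")"
    done
done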
00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:43.171 Found net devices under 0000:af:00.1: cvl_0_1 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.171 13:42:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.171 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.171 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.171 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:43.171 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.171 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.171 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.171 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:43.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:43.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:14:43.171 00:14:43.171 --- 10.0.0.2 ping statistics --- 00:14:43.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.172 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:14:43.172 00:14:43.172 --- 10.0.0.1 ping statistics --- 00:14:43.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.172 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1280320 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1280320 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 1280320 ']' 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:43.172 13:42:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:43.172 [2024-06-10 13:42:56.342832] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
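With the target up again, the multitarget test that follows exercises SPDK's ability to host more than one nvmf target object inside a single application: it adds two extra targets through the JSON-RPC helper, checks the count with jq, then deletes them and confirms only the default target remains. The commands it issues, taken from the trace below with the workspace path shortened, are essentially:

# Start with the default target, add two more, verify, then remove them again.
./spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length            # expect 1
./spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
./spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
./spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length            # expect 3
./spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
./spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
./spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length            # back to 1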
00:14:43.172 [2024-06-10 13:42:56.342890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.172 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.172 [2024-06-10 13:42:56.470130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.172 [2024-06-10 13:42:56.555667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.172 [2024-06-10 13:42:56.555714] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.172 [2024-06-10 13:42:56.555727] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.172 [2024-06-10 13:42:56.555743] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.172 [2024-06-10 13:42:56.555753] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.172 [2024-06-10 13:42:56.555807] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.172 [2024-06-10 13:42:56.555919] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.172 [2024-06-10 13:42:56.556030] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.172 [2024-06-10 13:42:56.556030] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:43.172 "nvmf_tgt_1" 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:43.172 "nvmf_tgt_2" 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:43.172 13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:43.431 13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:43.431 
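(For orientation, the multitarget check traced above boils down to the following minimal sketch; the helper script path, RPC names and arguments are taken verbatim from the trace, while the RPC_HELPER variable is only a shorthand introduced for this example.)

  # Shorthand for the helper invoked in the trace above (variable name is illustrative only).
  RPC_HELPER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  # The default target already exists, so the list starts with one entry.
  $RPC_HELPER nvmf_get_targets | jq length        # expected: 1

  # Create two additional targets with the same -s 32 argument used in the trace.
  $RPC_HELPER nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC_HELPER nvmf_create_target -n nvmf_tgt_2 -s 32

  # All three targets should now be reported before the delete path is exercised.
  $RPC_HELPER nvmf_get_targets | jq length        # expected: 3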
13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:43.431 true 00:14:43.431 13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:43.690 true 00:14:43.690 13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:43.690 13:42:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.690 rmmod nvme_tcp 00:14:43.690 rmmod nvme_fabrics 00:14:43.690 rmmod nvme_keyring 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1280320 ']' 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1280320 00:14:43.690 13:42:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 1280320 ']' 00:14:43.691 13:42:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 1280320 00:14:43.691 13:42:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:14:43.691 13:42:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:43.691 13:42:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1280320 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1280320' 00:14:43.950 killing process with pid 1280320 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 1280320 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 1280320 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.950 13:42:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.487 13:43:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:46.487 00:14:46.487 real 0m13.047s 00:14:46.487 user 0m10.403s 00:14:46.487 sys 0m7.304s 00:14:46.487 13:43:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:46.487 13:43:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:46.487 ************************************ 00:14:46.487 END TEST nvmf_multitarget 00:14:46.487 ************************************ 00:14:46.487 13:43:00 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:46.487 13:43:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:46.487 13:43:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:46.487 13:43:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:46.487 ************************************ 00:14:46.487 START TEST nvmf_rpc 00:14:46.487 ************************************ 00:14:46.487 13:43:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:46.487 * Looking for test storage... 00:14:46.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.488 13:43:00 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.488 
13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:46.488 13:43:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:56.471 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:56.472 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:56.472 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:56.472 Found net devices under 0000:af:00.0: cvl_0_0 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.472 
13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:56.472 Found net devices under 0000:af:00.1: cvl_0_1 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:56.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:56.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:14:56.472 00:14:56.472 --- 10.0.0.2 ping statistics --- 00:14:56.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.472 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:14:56.472 00:14:56.472 --- 10.0.0.1 ping statistics --- 00:14:56.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.472 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1285312 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1285312 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 1285312 ']' 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:56.472 13:43:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.472 [2024-06-10 13:43:09.568354] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:14:56.472 [2024-06-10 13:43:09.568413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.472 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.472 [2024-06-10 13:43:09.696515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.472 [2024-06-10 13:43:09.781605] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.472 [2024-06-10 13:43:09.781650] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.472 [2024-06-10 13:43:09.781664] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.472 [2024-06-10 13:43:09.781679] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.472 [2024-06-10 13:43:09.781689] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.472 [2024-06-10 13:43:09.781741] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.472 [2024-06-10 13:43:09.781763] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.472 [2024-06-10 13:43:09.781899] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.472 [2024-06-10 13:43:09.781899] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.472 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:56.473 "tick_rate": 2500000000, 00:14:56.473 "poll_groups": [ 00:14:56.473 { 00:14:56.473 "name": "nvmf_tgt_poll_group_000", 00:14:56.473 "admin_qpairs": 0, 00:14:56.473 "io_qpairs": 0, 00:14:56.473 "current_admin_qpairs": 0, 00:14:56.473 "current_io_qpairs": 0, 00:14:56.473 "pending_bdev_io": 0, 00:14:56.473 "completed_nvme_io": 0, 00:14:56.473 "transports": [] 00:14:56.473 }, 00:14:56.473 { 00:14:56.473 "name": "nvmf_tgt_poll_group_001", 00:14:56.473 "admin_qpairs": 0, 00:14:56.473 "io_qpairs": 0, 00:14:56.473 "current_admin_qpairs": 0, 00:14:56.473 "current_io_qpairs": 0, 00:14:56.473 "pending_bdev_io": 0, 00:14:56.473 "completed_nvme_io": 0, 00:14:56.473 "transports": [] 00:14:56.473 }, 00:14:56.473 { 00:14:56.473 "name": "nvmf_tgt_poll_group_002", 00:14:56.473 "admin_qpairs": 0, 00:14:56.473 "io_qpairs": 0, 00:14:56.473 "current_admin_qpairs": 0, 00:14:56.473 "current_io_qpairs": 0, 00:14:56.473 "pending_bdev_io": 0, 00:14:56.473 "completed_nvme_io": 0, 00:14:56.473 "transports": [] 
00:14:56.473 }, 00:14:56.473 { 00:14:56.473 "name": "nvmf_tgt_poll_group_003", 00:14:56.473 "admin_qpairs": 0, 00:14:56.473 "io_qpairs": 0, 00:14:56.473 "current_admin_qpairs": 0, 00:14:56.473 "current_io_qpairs": 0, 00:14:56.473 "pending_bdev_io": 0, 00:14:56.473 "completed_nvme_io": 0, 00:14:56.473 "transports": [] 00:14:56.473 } 00:14:56.473 ] 00:14:56.473 }' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.473 [2024-06-10 13:43:10.593100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:56.473 "tick_rate": 2500000000, 00:14:56.473 "poll_groups": [ 00:14:56.473 { 00:14:56.473 "name": "nvmf_tgt_poll_group_000", 00:14:56.473 "admin_qpairs": 0, 00:14:56.473 "io_qpairs": 0, 00:14:56.473 "current_admin_qpairs": 0, 00:14:56.473 "current_io_qpairs": 0, 00:14:56.473 "pending_bdev_io": 0, 00:14:56.473 "completed_nvme_io": 0, 00:14:56.473 "transports": [ 00:14:56.473 { 00:14:56.473 "trtype": "TCP" 00:14:56.473 } 00:14:56.473 ] 00:14:56.473 }, 00:14:56.473 { 00:14:56.473 "name": "nvmf_tgt_poll_group_001", 00:14:56.473 "admin_qpairs": 0, 00:14:56.473 "io_qpairs": 0, 00:14:56.473 "current_admin_qpairs": 0, 00:14:56.473 "current_io_qpairs": 0, 00:14:56.473 "pending_bdev_io": 0, 00:14:56.473 "completed_nvme_io": 0, 00:14:56.473 "transports": [ 00:14:56.473 { 00:14:56.473 "trtype": "TCP" 00:14:56.473 } 00:14:56.473 ] 00:14:56.473 }, 00:14:56.473 { 00:14:56.473 "name": "nvmf_tgt_poll_group_002", 00:14:56.473 "admin_qpairs": 0, 00:14:56.473 "io_qpairs": 0, 00:14:56.473 "current_admin_qpairs": 0, 00:14:56.473 "current_io_qpairs": 0, 00:14:56.473 "pending_bdev_io": 0, 00:14:56.473 "completed_nvme_io": 0, 00:14:56.473 "transports": [ 00:14:56.473 { 00:14:56.473 "trtype": "TCP" 00:14:56.473 } 00:14:56.473 ] 00:14:56.473 }, 00:14:56.473 { 00:14:56.473 "name": "nvmf_tgt_poll_group_003", 00:14:56.473 "admin_qpairs": 0, 00:14:56.473 "io_qpairs": 0, 00:14:56.473 "current_admin_qpairs": 0, 00:14:56.473 "current_io_qpairs": 0, 00:14:56.473 "pending_bdev_io": 0, 00:14:56.473 "completed_nvme_io": 0, 00:14:56.473 "transports": [ 00:14:56.473 { 00:14:56.473 "trtype": "TCP" 00:14:56.473 } 00:14:56.473 ] 00:14:56.473 } 00:14:56.473 ] 
00:14:56.473 }' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.473 Malloc1 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.473 [2024-06-10 13:43:10.781553] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 
--hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:56.473 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:56.474 [2024-06-10 13:43:10.810280] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562' 00:14:56.474 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:56.474 could not add new controller: failed to write to nvme-fabrics device 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.474 13:43:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:57.850 13:43:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:57.850 13:43:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:14:57.850 13:43:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:14:57.850 13:43:12 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:14:57.850 13:43:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:14:59.755 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:14:59.755 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:59.755 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:00.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x 
/usr/sbin/nvme ]] 00:15:00.015 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:00.015 [2024-06-10 13:43:14.475422] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562' 00:15:00.274 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:00.275 could not add new controller: failed to write to nvme-fabrics device 00:15:00.275 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:15:00.275 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:00.275 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:00.275 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:00.275 13:43:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:00.275 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:00.275 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.275 13:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:00.275 13:43:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:01.655 13:43:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:01.655 13:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:01.655 13:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.655 13:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:01.655 13:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:03.563 13:43:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 
-- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.563 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:03.563 13:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.563 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.563 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.563 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:03.563 13:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:03.563 13:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:03.563 13:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:03.563 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.563 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.822 [2024-06-10 13:43:18.048770] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:03.822 13:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.201 13:43:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:05.201 13:43:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:05.201 13:43:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:05.201 13:43:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:05.201 13:43:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:07.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.176 [2024-06-10 13:43:21.554723] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:07.176 
13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.176 13:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:08.553 13:43:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.553 13:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:08.553 13:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.553 13:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:08.553 13:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:11.088 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:11.088 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:11.088 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.088 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:11.088 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.088 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:11.088 13:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.088 13:43:25 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.088 [2024-06-10 13:43:25.143813] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:11.088 13:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:12.468 13:43:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:12.468 13:43:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:12.468 13:43:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.468 13:43:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:12.468 13:43:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.374 [2024-06-10 13:43:28.689978] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.374 13:43:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:15.754 13:43:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:15.754 13:43:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:15.754 13:43:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 
00:15:15.754 13:43:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:15.754 13:43:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:17.661 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:17.661 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:17.661 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.661 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:17.661 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.661 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:17.661 13:43:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:17.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.920 [2024-06-10 13:43:32.203247] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.920 13:43:32 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.920 13:43:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:19.300 13:43:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:19.300 13:43:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:19.300 13:43:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:19.300 13:43:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:19.300 13:43:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:21.208 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:21.208 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:21.208 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:21.208 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:21.208 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:21.208 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:21.208 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:21.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.468 [2024-06-10 13:43:35.764145] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.468 [2024-06-10 13:43:35.812267] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.468 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 [2024-06-10 13:43:35.864436] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 [2024-06-10 13:43:35.912621] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.469 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.729 [2024-06-10 13:43:35.960786] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:21.729 13:43:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:21.729 "tick_rate": 2500000000, 00:15:21.729 "poll_groups": [ 00:15:21.729 { 00:15:21.729 "name": "nvmf_tgt_poll_group_000", 00:15:21.729 "admin_qpairs": 2, 00:15:21.729 
"io_qpairs": 196, 00:15:21.729 "current_admin_qpairs": 0, 00:15:21.729 "current_io_qpairs": 0, 00:15:21.729 "pending_bdev_io": 0, 00:15:21.729 "completed_nvme_io": 247, 00:15:21.729 "transports": [ 00:15:21.729 { 00:15:21.729 "trtype": "TCP" 00:15:21.729 } 00:15:21.729 ] 00:15:21.729 }, 00:15:21.729 { 00:15:21.729 "name": "nvmf_tgt_poll_group_001", 00:15:21.729 "admin_qpairs": 2, 00:15:21.729 "io_qpairs": 196, 00:15:21.729 "current_admin_qpairs": 0, 00:15:21.729 "current_io_qpairs": 0, 00:15:21.729 "pending_bdev_io": 0, 00:15:21.729 "completed_nvme_io": 294, 00:15:21.729 "transports": [ 00:15:21.729 { 00:15:21.729 "trtype": "TCP" 00:15:21.729 } 00:15:21.729 ] 00:15:21.729 }, 00:15:21.729 { 00:15:21.729 "name": "nvmf_tgt_poll_group_002", 00:15:21.729 "admin_qpairs": 1, 00:15:21.729 "io_qpairs": 196, 00:15:21.729 "current_admin_qpairs": 0, 00:15:21.729 "current_io_qpairs": 0, 00:15:21.729 "pending_bdev_io": 0, 00:15:21.729 "completed_nvme_io": 283, 00:15:21.729 "transports": [ 00:15:21.729 { 00:15:21.729 "trtype": "TCP" 00:15:21.729 } 00:15:21.729 ] 00:15:21.729 }, 00:15:21.729 { 00:15:21.729 "name": "nvmf_tgt_poll_group_003", 00:15:21.729 "admin_qpairs": 2, 00:15:21.729 "io_qpairs": 196, 00:15:21.729 "current_admin_qpairs": 0, 00:15:21.729 "current_io_qpairs": 0, 00:15:21.729 "pending_bdev_io": 0, 00:15:21.729 "completed_nvme_io": 310, 00:15:21.729 "transports": [ 00:15:21.729 { 00:15:21.729 "trtype": "TCP" 00:15:21.729 } 00:15:21.729 ] 00:15:21.729 } 00:15:21.729 ] 00:15:21.729 }' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:21.729 rmmod nvme_tcp 00:15:21.729 rmmod nvme_fabrics 00:15:21.729 rmmod nvme_keyring 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:21.729 13:43:36 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1285312 ']' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1285312 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 1285312 ']' 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 1285312 00:15:21.729 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:15:21.730 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:21.730 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1285312 00:15:21.989 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:21.989 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:21.989 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1285312' 00:15:21.989 killing process with pid 1285312 00:15:21.989 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 1285312 00:15:21.989 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 1285312 00:15:22.249 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:22.249 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:22.249 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:22.249 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:22.249 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:22.249 13:43:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.249 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.249 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.157 13:43:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:24.157 00:15:24.157 real 0m37.987s 00:15:24.157 user 1m47.698s 00:15:24.157 sys 0m10.014s 00:15:24.157 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:24.157 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.157 ************************************ 00:15:24.157 END TEST nvmf_rpc 00:15:24.157 ************************************ 00:15:24.157 13:43:38 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:24.157 13:43:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:24.157 13:43:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:24.157 13:43:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:24.417 ************************************ 00:15:24.417 START TEST nvmf_invalid 00:15:24.417 ************************************ 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:24.417 * Looking for test storage... 
00:15:24.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.417 13:43:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:24.418 13:43:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:34.402 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:34.402 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.402 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:34.403 Found net devices under 0000:af:00.0: cvl_0_0 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:34.403 Found net devices under 0000:af:00.1: cvl_0_1 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:34.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:34.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:15:34.403 00:15:34.403 --- 10.0.0.2 ping statistics --- 00:15:34.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.403 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:15:34.403 00:15:34.403 --- 10.0.0.1 ping statistics --- 00:15:34.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.403 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1294436 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1294436 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 1294436 ']' 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:34.403 13:43:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:34.403 [2024-06-10 13:43:47.519676] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:15:34.403 [2024-06-10 13:43:47.519735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.403 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.403 [2024-06-10 13:43:47.647367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.403 [2024-06-10 13:43:47.733545] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.403 [2024-06-10 13:43:47.733591] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.403 [2024-06-10 13:43:47.733605] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.403 [2024-06-10 13:43:47.733616] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.403 [2024-06-10 13:43:47.733626] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.403 [2024-06-10 13:43:47.733678] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.403 [2024-06-10 13:43:47.733770] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.403 [2024-06-10 13:43:47.733881] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.403 [2024-06-10 13:43:47.733881] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.403 13:43:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:34.403 13:43:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:15:34.403 13:43:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:34.403 13:43:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:34.403 13:43:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:34.403 13:43:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.403 13:43:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:34.403 13:43:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4339 00:15:34.403 [2024-06-10 13:43:48.680405] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:34.403 13:43:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:34.403 { 00:15:34.403 "nqn": "nqn.2016-06.io.spdk:cnode4339", 00:15:34.403 "tgt_name": "foobar", 00:15:34.403 "method": "nvmf_create_subsystem", 00:15:34.403 "req_id": 1 00:15:34.403 } 00:15:34.403 Got JSON-RPC error response 00:15:34.403 response: 00:15:34.403 { 00:15:34.403 "code": -32603, 00:15:34.403 "message": "Unable to find target foobar" 00:15:34.403 }' 00:15:34.403 13:43:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:34.403 { 00:15:34.403 "nqn": "nqn.2016-06.io.spdk:cnode4339", 00:15:34.403 "tgt_name": "foobar", 00:15:34.403 "method": "nvmf_create_subsystem", 00:15:34.403 "req_id": 1 00:15:34.403 } 00:15:34.403 Got JSON-RPC error response 00:15:34.403 response: 00:15:34.403 { 00:15:34.403 "code": -32603, 00:15:34.403 "message": "Unable to find target foobar" 00:15:34.403 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:34.403 13:43:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:34.403 13:43:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18048 00:15:34.663 [2024-06-10 13:43:48.929347] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18048: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:34.663 13:43:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:34.663 { 00:15:34.663 "nqn": "nqn.2016-06.io.spdk:cnode18048", 00:15:34.663 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:34.663 "method": "nvmf_create_subsystem", 00:15:34.663 "req_id": 1 00:15:34.663 } 00:15:34.663 Got JSON-RPC error response 00:15:34.663 response: 00:15:34.663 { 00:15:34.663 "code": -32602, 00:15:34.663 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:34.663 }' 00:15:34.663 13:43:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:34.663 { 00:15:34.663 "nqn": "nqn.2016-06.io.spdk:cnode18048", 00:15:34.663 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:34.663 "method": "nvmf_create_subsystem", 00:15:34.663 "req_id": 1 00:15:34.663 } 00:15:34.663 Got JSON-RPC error response 00:15:34.663 response: 00:15:34.663 { 00:15:34.663 "code": -32602, 00:15:34.663 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:34.663 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:34.663 13:43:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:34.663 13:43:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14264 00:15:34.922 [2024-06-10 13:43:49.174147] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14264: invalid model number 'SPDK_Controller' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:34.923 { 00:15:34.923 "nqn": "nqn.2016-06.io.spdk:cnode14264", 00:15:34.923 "model_number": "SPDK_Controller\u001f", 00:15:34.923 "method": "nvmf_create_subsystem", 00:15:34.923 "req_id": 1 00:15:34.923 } 00:15:34.923 Got JSON-RPC error response 00:15:34.923 response: 00:15:34.923 { 00:15:34.923 "code": -32602, 00:15:34.923 "message": "Invalid MN SPDK_Controller\u001f" 00:15:34.923 }' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:34.923 { 00:15:34.923 "nqn": "nqn.2016-06.io.spdk:cnode14264", 00:15:34.923 "model_number": "SPDK_Controller\u001f", 00:15:34.923 "method": "nvmf_create_subsystem", 00:15:34.923 "req_id": 1 00:15:34.923 } 00:15:34.923 Got JSON-RPC error response 00:15:34.923 response: 00:15:34.923 { 00:15:34.923 "code": -32602, 00:15:34.923 "message": "Invalid MN SPDK_Controller\u001f" 00:15:34.923 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 85 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:34.923 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:34.924 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:34.924 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:34.924 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ X == \- ]] 00:15:34.924 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'XZo{SY|Yp9#2h~3UR# MX' 00:15:34.924 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'XZo{SY|Yp9#2h~3UR# MX' nqn.2016-06.io.spdk:cnode19001 00:15:35.183 [2024-06-10 13:43:49.579622] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19001: invalid serial number 'XZo{SY|Yp9#2h~3UR# MX' 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:35.183 { 00:15:35.183 "nqn": "nqn.2016-06.io.spdk:cnode19001", 00:15:35.183 "serial_number": "XZo{SY|Yp9#2h~3UR# MX", 00:15:35.183 "method": "nvmf_create_subsystem", 00:15:35.183 "req_id": 1 00:15:35.183 } 00:15:35.183 Got JSON-RPC error response 00:15:35.183 response: 00:15:35.183 { 00:15:35.183 "code": -32602, 
00:15:35.183 "message": "Invalid SN XZo{SY|Yp9#2h~3UR# MX" 00:15:35.183 }' 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:35.183 { 00:15:35.183 "nqn": "nqn.2016-06.io.spdk:cnode19001", 00:15:35.183 "serial_number": "XZo{SY|Yp9#2h~3UR# MX", 00:15:35.183 "method": "nvmf_create_subsystem", 00:15:35.183 "req_id": 1 00:15:35.183 } 00:15:35.183 Got JSON-RPC error response 00:15:35.183 response: 00:15:35.183 { 00:15:35.183 "code": -32602, 00:15:35.183 "message": "Invalid SN XZo{SY|Yp9#2h~3UR# MX" 00:15:35.183 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.183 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:35.183 13:43:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:35.443 13:43:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:35.443 13:43:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:35.443 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.444 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ';3Xx\-bmNE[V@mO8I>\|su;:l2<;f5T!A[u8QmsWt' 00:15:35.703 13:43:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ';3Xx\-bmNE[V@mO8I>\|su;:l2<;f5T!A[u8QmsWt' nqn.2016-06.io.spdk:cnode5419 00:15:35.703 [2024-06-10 13:43:50.149669] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5419: invalid model number ';3Xx\-bmNE[V@mO8I>\|su;:l2<;f5T!A[u8QmsWt' 00:15:35.962 13:43:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:35.962 { 00:15:35.962 "nqn": "nqn.2016-06.io.spdk:cnode5419", 00:15:35.962 "model_number": ";3Xx\\-bmNE[V@mO8I>\\|su;:l2<;f5T!A[u8QmsWt", 00:15:35.962 "method": "nvmf_create_subsystem", 00:15:35.962 "req_id": 1 00:15:35.962 } 00:15:35.962 Got JSON-RPC error response 00:15:35.962 response: 00:15:35.962 { 00:15:35.962 "code": -32602, 00:15:35.962 "message": "Invalid MN ;3Xx\\-bmNE[V@mO8I>\\|su;:l2<;f5T!A[u8QmsWt" 00:15:35.962 }' 00:15:35.962 13:43:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:35.962 { 00:15:35.962 "nqn": "nqn.2016-06.io.spdk:cnode5419", 00:15:35.962 "model_number": ";3Xx\\-bmNE[V@mO8I>\\|su;:l2<;f5T!A[u8QmsWt", 00:15:35.962 "method": "nvmf_create_subsystem", 00:15:35.962 "req_id": 1 00:15:35.962 } 00:15:35.962 Got JSON-RPC error response 00:15:35.962 response: 00:15:35.962 { 00:15:35.962 "code": -32602, 00:15:35.962 "message": "Invalid MN ;3Xx\\-bmNE[V@mO8I>\\|su;:l2<;f5T!A[u8QmsWt" 00:15:35.962 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:35.962 13:43:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:35.962 [2024-06-10 13:43:50.390602] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.962 13:43:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:36.221 13:43:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:36.221 13:43:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:36.221 13:43:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:36.221 13:43:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:36.221 13:43:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:36.480 [2024-06-10 13:43:50.824178] nvmf_rpc.c: 805:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:36.480 13:43:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:36.480 { 00:15:36.480 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:36.480 "listen_address": { 00:15:36.480 "trtype": "tcp", 00:15:36.480 "traddr": "", 00:15:36.480 "trsvcid": "4421" 00:15:36.480 }, 00:15:36.480 "method": "nvmf_subsystem_remove_listener", 00:15:36.480 "req_id": 1 00:15:36.480 } 00:15:36.480 Got JSON-RPC error response 00:15:36.480 response: 00:15:36.480 { 00:15:36.480 "code": -32602, 00:15:36.480 "message": "Invalid parameters" 00:15:36.480 }' 00:15:36.480 13:43:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:36.480 { 00:15:36.480 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:36.480 "listen_address": { 00:15:36.480 "trtype": "tcp", 00:15:36.480 "traddr": "", 00:15:36.480 "trsvcid": "4421" 00:15:36.480 }, 00:15:36.480 "method": "nvmf_subsystem_remove_listener", 00:15:36.480 "req_id": 1 00:15:36.480 } 00:15:36.480 Got JSON-RPC error response 00:15:36.480 response: 00:15:36.480 { 00:15:36.480 "code": -32602, 00:15:36.480 "message": "Invalid parameters" 00:15:36.480 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:36.480 13:43:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13013 -i 0 00:15:36.739 [2024-06-10 13:43:51.068943] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13013: invalid cntlid range [0-65519] 00:15:36.739 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:36.739 { 00:15:36.739 "nqn": "nqn.2016-06.io.spdk:cnode13013", 00:15:36.739 "min_cntlid": 0, 00:15:36.739 "method": "nvmf_create_subsystem", 00:15:36.739 "req_id": 1 00:15:36.739 } 00:15:36.739 Got JSON-RPC error response 00:15:36.739 response: 00:15:36.739 { 00:15:36.739 "code": -32602, 00:15:36.739 "message": "Invalid cntlid range [0-65519]" 00:15:36.739 }' 00:15:36.739 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:36.739 { 00:15:36.739 "nqn": "nqn.2016-06.io.spdk:cnode13013", 00:15:36.739 "min_cntlid": 0, 00:15:36.739 "method": "nvmf_create_subsystem", 00:15:36.739 "req_id": 1 00:15:36.739 } 00:15:36.739 Got JSON-RPC error response 00:15:36.739 response: 00:15:36.739 { 00:15:36.739 "code": -32602, 00:15:36.739 "message": "Invalid cntlid range [0-65519]" 00:15:36.739 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:15:36.739 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31528 -i 65520 00:15:36.998 [2024-06-10 13:43:51.305744] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31528: invalid cntlid range [65520-65519] 00:15:36.998 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:36.998 { 00:15:36.998 "nqn": "nqn.2016-06.io.spdk:cnode31528", 00:15:36.998 "min_cntlid": 65520, 00:15:36.998 "method": "nvmf_create_subsystem", 00:15:36.998 "req_id": 1 00:15:36.998 } 00:15:36.998 Got JSON-RPC error response 00:15:36.998 response: 00:15:36.998 { 00:15:36.998 "code": -32602, 00:15:36.998 "message": "Invalid cntlid range [65520-65519]" 00:15:36.998 }' 00:15:36.998 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:36.998 { 00:15:36.998 "nqn": "nqn.2016-06.io.spdk:cnode31528", 00:15:36.998 "min_cntlid": 65520, 00:15:36.998 "method": "nvmf_create_subsystem", 00:15:36.998 "req_id": 1 00:15:36.998 } 00:15:36.998 Got JSON-RPC error response 00:15:36.998 response: 00:15:36.998 { 00:15:36.998 "code": -32602, 00:15:36.998 "message": "Invalid cntlid range [65520-65519]" 00:15:36.998 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:36.998 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30949 -I 0 00:15:37.256 [2024-06-10 13:43:51.538559] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30949: invalid cntlid range [1-0] 00:15:37.256 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:37.256 { 00:15:37.256 "nqn": "nqn.2016-06.io.spdk:cnode30949", 00:15:37.256 "max_cntlid": 0, 00:15:37.256 "method": "nvmf_create_subsystem", 00:15:37.256 "req_id": 1 00:15:37.256 } 00:15:37.256 Got JSON-RPC error response 00:15:37.256 response: 00:15:37.256 { 00:15:37.256 "code": -32602, 00:15:37.256 "message": "Invalid cntlid range [1-0]" 00:15:37.256 }' 00:15:37.257 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:37.257 { 00:15:37.257 "nqn": "nqn.2016-06.io.spdk:cnode30949", 00:15:37.257 "max_cntlid": 0, 00:15:37.257 "method": "nvmf_create_subsystem", 00:15:37.257 "req_id": 1 00:15:37.257 } 00:15:37.257 Got JSON-RPC error response 00:15:37.257 response: 00:15:37.257 { 00:15:37.257 "code": -32602, 00:15:37.257 "message": "Invalid cntlid range [1-0]" 00:15:37.257 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:37.257 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16268 -I 65520 00:15:37.515 [2024-06-10 13:43:51.779404] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16268: invalid cntlid range [1-65520] 00:15:37.515 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:37.515 { 00:15:37.515 "nqn": "nqn.2016-06.io.spdk:cnode16268", 00:15:37.515 "max_cntlid": 65520, 00:15:37.515 "method": "nvmf_create_subsystem", 00:15:37.515 "req_id": 1 00:15:37.515 } 00:15:37.515 Got JSON-RPC error response 00:15:37.515 response: 00:15:37.515 { 00:15:37.515 "code": -32602, 00:15:37.515 "message": "Invalid cntlid range [1-65520]" 00:15:37.515 }' 00:15:37.515 13:43:51 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:15:37.515 { 00:15:37.515 "nqn": "nqn.2016-06.io.spdk:cnode16268", 00:15:37.515 "max_cntlid": 65520, 00:15:37.515 "method": "nvmf_create_subsystem", 00:15:37.515 "req_id": 1 00:15:37.515 } 00:15:37.515 Got JSON-RPC error response 00:15:37.515 response: 00:15:37.515 { 00:15:37.515 "code": -32602, 00:15:37.515 "message": "Invalid cntlid range [1-65520]" 00:15:37.515 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:37.515 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24443 -i 6 -I 5 00:15:37.773 [2024-06-10 13:43:52.020265] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24443: invalid cntlid range [6-5] 00:15:37.773 13:43:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:37.773 { 00:15:37.773 "nqn": "nqn.2016-06.io.spdk:cnode24443", 00:15:37.773 "min_cntlid": 6, 00:15:37.773 "max_cntlid": 5, 00:15:37.773 "method": "nvmf_create_subsystem", 00:15:37.773 "req_id": 1 00:15:37.773 } 00:15:37.773 Got JSON-RPC error response 00:15:37.773 response: 00:15:37.773 { 00:15:37.773 "code": -32602, 00:15:37.773 "message": "Invalid cntlid range [6-5]" 00:15:37.773 }' 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:37.774 { 00:15:37.774 "nqn": "nqn.2016-06.io.spdk:cnode24443", 00:15:37.774 "min_cntlid": 6, 00:15:37.774 "max_cntlid": 5, 00:15:37.774 "method": "nvmf_create_subsystem", 00:15:37.774 "req_id": 1 00:15:37.774 } 00:15:37.774 Got JSON-RPC error response 00:15:37.774 response: 00:15:37.774 { 00:15:37.774 "code": -32602, 00:15:37.774 "message": "Invalid cntlid range [6-5]" 00:15:37.774 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:37.774 { 00:15:37.774 "name": "foobar", 00:15:37.774 "method": "nvmf_delete_target", 00:15:37.774 "req_id": 1 00:15:37.774 } 00:15:37.774 Got JSON-RPC error response 00:15:37.774 response: 00:15:37.774 { 00:15:37.774 "code": -32602, 00:15:37.774 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:37.774 }' 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:37.774 { 00:15:37.774 "name": "foobar", 00:15:37.774 "method": "nvmf_delete_target", 00:15:37.774 "req_id": 1 00:15:37.774 } 00:15:37.774 Got JSON-RPC error response 00:15:37.774 response: 00:15:37.774 { 00:15:37.774 "code": -32602, 00:15:37.774 "message": "The specified target doesn't exist, cannot delete it." 
00:15:37.774 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:37.774 rmmod nvme_tcp 00:15:37.774 rmmod nvme_fabrics 00:15:37.774 rmmod nvme_keyring 00:15:37.774 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.032 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:15:38.032 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:15:38.032 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1294436 ']' 00:15:38.032 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1294436 00:15:38.032 13:43:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 1294436 ']' 00:15:38.032 13:43:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 1294436 00:15:38.032 13:43:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:15:38.033 13:43:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:38.033 13:43:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1294436 00:15:38.033 13:43:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:38.033 13:43:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:38.033 13:43:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1294436' 00:15:38.033 killing process with pid 1294436 00:15:38.033 13:43:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 1294436 00:15:38.033 13:43:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 1294436 00:15:38.291 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:38.291 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:38.291 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:38.291 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.291 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.291 13:43:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.291 13:43:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.291 13:43:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.254 13:43:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:40.254 00:15:40.254 real 0m15.966s 00:15:40.254 user 0m24.138s 00:15:40.254 sys 0m8.037s 00:15:40.254 13:43:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:40.254 13:43:54 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:40.254 ************************************ 00:15:40.254 END TEST nvmf_invalid 00:15:40.254 ************************************ 00:15:40.254 13:43:54 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:40.254 13:43:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:40.254 13:43:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:40.254 13:43:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:40.254 ************************************ 00:15:40.254 START TEST nvmf_abort 00:15:40.254 ************************************ 00:15:40.254 13:43:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:40.513 * Looking for test storage... 00:15:40.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:40.513 13:43:54 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:15:40.513 13:43:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:48.634 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:48.635 
13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:48.635 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.635 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:48.894 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:48.894 Found net devices under 0000:af:00.0: cvl_0_0 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:48.894 Found net devices under 0000:af:00.1: cvl_0_1 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:48.894 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:49.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:15:49.154 00:15:49.154 --- 10.0.0.2 ping statistics --- 00:15:49.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.154 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:15:49.154 00:15:49.154 --- 10.0.0.1 ping statistics --- 00:15:49.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.154 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1299856 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1299856 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 1299856 ']' 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:49.154 13:44:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:49.154 [2024-06-10 13:44:03.527616] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
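The namespace plumbing that the trace above performs (nvmf_tcp_init in nvmf/common.sh) can be read more easily as the following sketch, reconstructed from the commands visible in this run; interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are taken from the log and the long workspace paths are shortened.

    # Move one E810 port into its own network namespace so the SPDK target and
    # the initiator talk over real TCP rather than loopback.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator side stays in the default namespace, target side in the new one.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic to port 4420 and verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # The target application then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE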
00:15:49.154 [2024-06-10 13:44:03.527684] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.154 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.414 [2024-06-10 13:44:03.644877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:49.414 [2024-06-10 13:44:03.729671] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.414 [2024-06-10 13:44:03.729715] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.414 [2024-06-10 13:44:03.729729] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.414 [2024-06-10 13:44:03.729741] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.414 [2024-06-10 13:44:03.729751] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.414 [2024-06-10 13:44:03.729860] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.414 [2024-06-10 13:44:03.729978] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.414 [2024-06-10 13:44:03.729978] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.981 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:49.981 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:15:49.981 13:44:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.981 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:49.981 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:50.240 [2024-06-10 13:44:04.491779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:50.240 Malloc0 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:50.240 Delay0 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:50.240 13:44:04 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:50.240 [2024-06-10 13:44:04.570184] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.240 13:44:04 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:50.240 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.240 [2024-06-10 13:44:04.698609] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:52.775 Initializing NVMe Controllers 00:15:52.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:52.775 controller IO queue size 128 less than required 00:15:52.775 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:52.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:52.775 Initialization complete. Launching workers. 
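The rpc_cmd calls traced above (target/abort.sh steps 17-30) correspond to the rpc.py sequence below; this is a sketch assembled from the trace with paths shortened, and MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=4096 come from this run. The delay bdev keeps I/O outstanding long enough for the abort example to have commands to abort.

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Drive the slow (delayed) namespace at queue depth 128 for one second and
    # let the example abort whatever is still in flight.
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128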
00:15:52.775 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30694 00:15:52.775 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30755, failed to submit 62 00:15:52.775 success 30698, unsuccess 57, failed 0 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:52.775 rmmod nvme_tcp 00:15:52.775 rmmod nvme_fabrics 00:15:52.775 rmmod nvme_keyring 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1299856 ']' 00:15:52.775 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1299856 00:15:52.776 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 1299856 ']' 00:15:52.776 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 1299856 00:15:52.776 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:15:52.776 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:52.776 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1299856 00:15:52.776 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:52.776 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:52.776 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1299856' 00:15:52.776 killing process with pid 1299856 00:15:52.776 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 1299856 00:15:52.776 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 1299856 00:15:52.776 13:44:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:52.776 13:44:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:52.776 13:44:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:52.776 13:44:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.776 13:44:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:52.776 13:44:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.776 13:44:07 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.776 13:44:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.310 13:44:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:55.310 00:15:55.310 real 0m14.508s 00:15:55.310 user 0m14.008s 00:15:55.310 sys 0m7.823s 00:15:55.310 13:44:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:55.310 13:44:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:55.310 ************************************ 00:15:55.310 END TEST nvmf_abort 00:15:55.310 ************************************ 00:15:55.310 13:44:09 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:15:55.310 13:44:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:55.310 13:44:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:55.310 13:44:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:55.310 ************************************ 00:15:55.310 START TEST nvmf_ns_hotplug_stress 00:15:55.310 ************************************ 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:15:55.310 * Looking for test storage... 00:15:55.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.310 13:44:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.310 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.311 13:44:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:55.311 13:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.433 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:03.433 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:03.433 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:03.433 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:03.433 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:03.434 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:03.434 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:03.434 13:44:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:03.434 Found net devices under 0000:af:00.0: cvl_0_0 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:03.434 Found net devices under 0000:af:00.1: cvl_0_1 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
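The device-discovery steps traced above (gather_supported_nvmf_pci_devs plus the start of nvmf_tcp_init) reduce to the sketch below: pick the Intel E810 ports by PCI device ID and map each PCI address to its kernel net device through sysfs. Population of the pci_bus_cache array (an lspci scan earlier in common.sh) is omitted here, and the per-device link-state check is only summarized in a comment.

    intel=0x8086
    e810=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}")     # this run found 0000:af:00.0 and 0000:af:00.1

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
        # (the script also verifies each device's link state is up before using it)
        net_devs+=("${pci_net_devs[@]}")          # -> cvl_0_0, cvl_0_1
    done

    # With two ports available, one becomes the target and one the initiator.
    NVMF_TARGET_INTERFACE=${net_devs[0]}      # cvl_0_0
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}   # cvl_0_1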
00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.434 13:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.693 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.693 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.693 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:03.693 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.693 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.693 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.952 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:03.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:16:03.952 00:16:03.952 --- 10.0.0.2 ping statistics --- 00:16:03.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.952 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:16:03.952 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:03.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:16:03.952 00:16:03.953 --- 10.0.0.1 ping statistics --- 00:16:03.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.953 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1304938 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1304938 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 1304938 ']' 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:03.953 13:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.953 [2024-06-10 13:44:18.293252] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:16:03.953 [2024-06-10 13:44:18.293315] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.953 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.953 [2024-06-10 13:44:18.411917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.212 [2024-06-10 13:44:18.497880] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
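The ns_hotplug_stress body that follows boils down to the sketch below: while spdk_nvme_perf keeps random reads running against the subsystem, namespace 1 is repeatedly hot-removed and re-added and the null bdev is grown by one block each pass. This is assembled from the rpc.py and perf invocations visible in the trace; paths are shortened and the loop's exact control flow is approximated.

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Background reader: 30 s of queue-depth-128 random 512 B reads over TCP.
    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    # Stress loop: hot-remove/re-add namespace 1 and grow the null bdev while
    # the perf job is still alive.
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"
    done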
00:16:04.212 [2024-06-10 13:44:18.497926] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.212 [2024-06-10 13:44:18.497940] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.212 [2024-06-10 13:44:18.497952] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.212 [2024-06-10 13:44:18.497963] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:04.212 [2024-06-10 13:44:18.498070] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.212 [2024-06-10 13:44:18.498185] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.212 [2024-06-10 13:44:18.498185] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.779 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:04.779 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:16:04.779 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:04.779 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:04.779 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.779 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.779 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:16:04.779 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:05.038 [2024-06-10 13:44:19.339022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.038 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:05.297 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.297 [2024-06-10 13:44:19.701787] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.297 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:05.556 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:16:05.814 Malloc0 00:16:05.814 13:44:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:06.072 Delay0 00:16:06.072 13:44:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:06.331 13:44:20 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:06.331 NULL1 00:16:06.331 13:44:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:06.590 13:44:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:06.590 13:44:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1305411 00:16:06.590 13:44:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:06.590 13:44:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.590 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.971 Read completed with error (sct=0, sc=11) 00:16:07.971 13:44:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:07.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.972 13:44:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:16:07.972 13:44:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:16:08.230 true 00:16:08.230 13:44:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:08.230 13:44:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.167 13:44:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:09.167 13:44:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:16:09.167 13:44:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:16:09.427 true 00:16:09.427 13:44:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:09.427 13:44:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.427 13:44:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:09.686 13:44:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:16:09.686 
13:44:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:16:09.945 true 00:16:09.945 13:44:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:09.945 13:44:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:10.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.142 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:11.142 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.142 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.142 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.142 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:16:11.142 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:16:11.400 true 00:16:11.400 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:11.400 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.337 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:12.337 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:16:12.337 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:16:12.596 true 00:16:12.596 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:12.596 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.855 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:12.855 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:16:12.855 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:16:13.114 true 00:16:13.114 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:13.114 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.373 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:13.631 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:16:13.631 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:16:13.631 true 00:16:13.631 13:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:13.631 13:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.890 13:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:14.147 13:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:16:14.147 13:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:16:14.147 true 00:16:14.147 13:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:14.147 13:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:15.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.605 13:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:15.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.605 13:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:16:15.605 13:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:16:15.864 true 00:16:15.864 13:44:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:15.864 13:44:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.801 13:44:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:16.801 13:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:16:16.801 13:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:16:16.801 true 00:16:17.059 13:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:17.059 
13:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.059 13:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:17.318 13:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:16:17.318 13:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:16:17.577 true 00:16:17.577 13:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:17.577 13:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.962 13:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:18.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.962 13:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:16:18.962 13:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:16:18.962 true 00:16:18.962 13:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:18.962 13:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:19.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.900 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:20.158 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:16:20.158 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:16:20.158 true 00:16:20.417 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:20.417 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:20.417 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:20.676 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:16:20.676 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:16:20.676 true 00:16:20.935 13:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:20.935 13:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:20.935 13:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:20.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.193 13:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:16:21.193 13:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:16:21.453 true 00:16:21.453 13:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:21.453 13:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.390 13:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:22.390 13:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:16:22.390 13:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:16:22.649 true 00:16:22.649 13:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:22.649 13:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.908 13:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:22.908 13:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:16:22.908 13:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:16:23.165 true 00:16:23.165 13:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:23.165 13:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.543 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:16:24.543 13:44:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:24.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.543 13:44:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:16:24.543 13:44:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:16:24.802 true 00:16:24.802 13:44:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:24.802 13:44:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.738 13:44:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:25.738 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:16:25.738 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:16:25.997 true 00:16:25.997 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:25.997 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:26.257 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:26.516 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:16:26.516 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:16:26.516 true 00:16:26.516 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:26.516 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.712 13:44:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:27.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.712 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:16:27.712 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:16:27.971 true 00:16:27.971 
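The same remove/add/resize cycle continues like this until null_size reaches 1031 further down. While it runs, the churn can also be observed from a second shell with the standard SPDK query RPCs; a hypothetical watcher, not part of the traced script (the jq paths are assumptions about the JSON these two RPCs return):

  # Poll once a second: the namespaces currently exposed by cnode1, and NULL1's current block count.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  while sleep 1; do
      $rpc_py nvmf_get_subsystems | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'
      $rpc_py bdev_get_bdevs -b NULL1 | jq '.[0].num_blocks'
  done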
13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:27.971 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:28.230 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:28.230 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:16:28.230 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:16:28.489 true 00:16:28.489 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:28.489 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:28.748 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:28.748 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:16:28.748 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:16:29.007 true 00:16:29.007 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:29.007 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:29.266 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:29.526 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:16:29.526 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:29.526 true 00:16:29.526 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:29.526 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.903 13:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:30.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.162 13:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:16:31.162 13:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:31.162 true 00:16:31.162 13:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:31.162 13:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.099 13:44:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:32.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.358 13:44:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:32.358 13:44:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:32.358 true 00:16:32.358 13:44:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:32.358 13:44:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.616 13:44:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:32.875 13:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:32.875 13:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:32.875 true 00:16:32.875 13:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:32.875 13:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.256 13:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:34.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.256 13:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:16:34.256 13:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:34.514 true 00:16:34.514 13:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:34.514 13:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.451 
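Just below, the I/O workload finishes and prints its latency summary, the loop does one last resize (null_size=1031), and the kill -0 at @44 then fails with "No such process": PID 1305411 is gone, so the script waits for it, removes namespaces 1 and 2, and moves on to a second phase at ns_hotplug_stress.sh@58-@66. There, eight null bdevs (null0..null7) are created and eight backgrounded add_remove workers race to hot-add and hot-remove namespace IDs 1-8, ten times each. A sketch of that phase, reconstructed from the trace that follows (the function layout, the backgrounding with &, and the loop forms are inferred; the RPC invocations and constants are the traced ones):

  # Reconstruction of add_remove (ns_hotplug_stress.sh@14-@18) and its driver loops (@58-@66).
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  add_remove() {
      local nsid=$1 bdev=$2                                        # @14
      for ((i = 0; i < 10; i++)); do                               # @16: ten add/remove rounds per worker
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # @17
          $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"          # @18
      done
  }

  nthreads=8; pids=()                                              # @58
  for ((i = 0; i < nthreads; i++)); do                             # @59
      $rpc_py bdev_null_create "null$i" 100 4096                   # @60: args as traced (size 100, block size 4096)
  done
  for ((i = 0; i < nthreads; i++)); do                             # @62
      add_remove $((i + 1)) "null$i" &                             # @63: nsid 1..8 paired with null0..null7
      pids+=($!)                                                   # @64
  done
  wait "${pids[@]}"                                                # @66: "wait 1311075 1311077 ..." in this run

The interleaving of @16/@17/@18 lines from eight workers is what makes the rest of this excerpt look shuffled: each worker's own trace is sequential, but the eight traces are multiplexed into a single log stream.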
13:44:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.451 13:44:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:35.451 13:44:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:35.710 true 00:16:35.710 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:35.710 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.969 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.969 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:16:35.969 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:16:36.228 true 00:16:36.228 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:36.228 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.165 Initializing NVMe Controllers 00:16:37.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:37.165 Controller IO queue size 128, less than required. 00:16:37.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:37.165 Controller IO queue size 128, less than required. 00:16:37.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:37.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:37.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:37.165 Initialization complete. Launching workers. 
00:16:37.165 ========================================================
00:16:37.165 Latency(us)
00:16:37.165 Device Information : IOPS MiB/s Average min max
00:16:37.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1455.83 0.71 57688.32 2907.22 1111134.42
00:16:37.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16793.83 8.20 7621.80 2195.52 404479.24
00:16:37.165 ========================================================
00:16:37.165 Total : 18249.66 8.91 11615.75 2195.52 1111134.42
00:16:37.165
00:16:37.424 13:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:37.424 13:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:16:37.683 13:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:16:37.683 true 00:16:37.683 13:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1305411 00:16:37.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1305411) - No such process 00:16:37.683 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1305411 00:16:37.941 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.941 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:37.941 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:37.941 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:37.941 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:37.941 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:37.941 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:38.199 null0 00:16:38.199 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:38.199 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:38.199 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:38.510 null1 00:16:38.510 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:38.510 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:38.510 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:38.773 null2 00:16:38.773 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:38.773 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:16:38.773 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:38.773 null3 00:16:39.032 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:39.032 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:39.032 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:39.032 null4 00:16:39.032 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:39.032 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:39.032 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:39.291 null5 00:16:39.291 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:39.291 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:39.291 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:39.291 null6 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:39.550 null7 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:39.550 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1311075 1311077 1311078 1311080 1311082 1311084 1311086 1311088 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:39.551 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:39.811 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.811 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:39.811 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:39.811 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:39.811 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:39.811 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:39.811 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:39.811 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.070 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:40.329 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.329 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:40.329 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:16:40.329 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:40.329 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:40.329 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:40.329 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:40.329 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.588 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:40.588 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.847 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:40.847 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:40.847 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:40.848 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:40.848 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:40.848 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:40.848 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:40.848 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:40.848 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:40.848 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:41.106 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.106 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.106 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:41.106 13:44:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.106 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.106 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:41.106 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.106 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.107 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:41.366 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:41.366 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:41.366 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:41.366 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:41.366 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:41.366 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:41.366 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.366 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.366 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:41.366 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.366 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.366 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:41.625 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.625 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:41.625 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:41.625 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:41.884 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:41.884 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:41.884 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:41.884 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:41.884 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.884 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.884 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:41.884 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.884 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.884 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:41.884 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.884 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.885 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:41.885 13:44:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:41.885 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:41.885 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:42.144 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:42.403 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.404 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:42.663 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:42.663 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:42.663 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.663 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.663 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:42.663 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:42.663 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:42.663 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:42.663 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:42.663 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:42.663 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.663 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.663 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.922 13:44:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.922 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:43.181 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:43.181 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:43.181 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:43.181 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.181 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.181 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:43.181 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.181 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.181 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:43.181 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:43.181 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:43.181 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:43.440 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.699 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:43.957 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.957 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.957 13:44:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:43.957 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.957 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.957 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:43.957 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.957 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.957 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:43.957 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.957 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.957 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:43.957 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:43.958 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:43.958 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.958 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.958 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:43.958 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.216 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:44.474 rmmod nvme_tcp 00:16:44.474 rmmod nvme_fabrics 00:16:44.474 rmmod nvme_keyring 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1304938 ']' 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1304938 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 1304938 ']' 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 1304938 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1304938 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1304938' 00:16:44.474 killing process with pid 1304938 00:16:44.474 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 1304938 00:16:44.474 
13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 1304938
00:16:44.734 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:44.734 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:44.734 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:44.734 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:44.734 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:44.734 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:44.734 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:44.734 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:47.271 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:47.271
00:16:47.271 real 0m51.867s
00:16:47.271 user 3m17.416s
00:16:47.271 sys 0m23.274s
00:16:47.271 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable
00:16:47.271 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:16:47.271 ************************************
00:16:47.271 END TEST nvmf_ns_hotplug_stress
00:16:47.271 ************************************
00:16:47.271 13:45:01 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:16:47.271 13:45:01 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:16:47.271 13:45:01 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:16:47.271 13:45:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:47.271 ************************************
00:16:47.271 START TEST nvmf_connect_stress
00:16:47.271 ************************************
00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:16:47.271 * Looking for test storage...
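Before the connect_stress setup continues below, it is worth spelling out what the nvmf_ns_hotplug_stress pass that just ended was doing. The repeated xtrace lines above come from target/ns_hotplug_stress.sh (the @16, @17 and @18 markers), which keeps re-adding and hot-removing namespaces 1-8 on nqn.2016-06.io.spdk:cnode1 while traffic runs. The following is only a reconstruction from the trace, not the script itself; the heavily interleaved output suggests the RPCs are actually issued from several concurrent workers, and the removal order is effectively arbitrary.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as seen in the trace
    for (( i = 0; i < 10; ++i )); do                                          # ns_hotplug_stress.sh@16
        for n in $(seq 1 8); do
            # re-attach namespace $n, backed by the matching null bdev (null0..null7)
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"    # @17
        done
        for n in $(seq 1 8 | shuf); do
            # hot-remove the namespaces again, in arbitrary order
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"                     # @18
        done
    done

The namespace IDs, bdev names and RPC method names in this sketch are taken verbatim from the log; the loop structure and the use of shuf are approximations.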
00:16:47.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:47.271 13:45:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:55.402 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:55.402 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:55.402 Found net devices under 0000:af:00.0: cvl_0_0 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.402 13:45:09 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.402 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:55.403 Found net devices under 0000:af:00.1: cvl_0_1 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.403 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:55.662 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:55.662 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:55.662 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:55.662 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:55.662 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:55.662 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:55.662 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:55.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
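The ping whose output starts here is the last step of the network bring-up traced just above. Stripped of the xtrace prefixes, the plumbing performed by nvmf_tcp_init amounts to the commands below (all taken from the trace; cvl_0_0 and cvl_0_1 are the two ports of the E810 adapter found earlier, with cvl_0_0 selected as the target interface and cvl_0_1 as the initiator interface):

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP (port 4420) traffic in
    ping -c 1 10.0.0.2                                                  # verify initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # verify target -> initiator

The ping output that follows confirms both directions are reachable before the NVMe-oF target is started.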
00:16:55.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:16:55.921 00:16:55.921 --- 10.0.0.2 ping statistics --- 00:16:55.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.921 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:55.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:16:55.921 00:16:55.921 --- 10.0.0.1 ping statistics --- 00:16:55.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.921 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1317277 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1317277 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 1317277 ']' 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:55.921 13:45:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.921 [2024-06-10 13:45:10.293038] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
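With the namespace reachable, nvmfappstart launches the SPDK target inside it and waits for its RPC socket. Reduced to the essentials, and using the commands visible in the trace (the use of $! for the PID is an assumption about what nvmf/common.sh does, not something shown in the log):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                 # assumed; the trace only shows the resulting value, nvmfpid=1317277
    waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock (up to max_retries=100) until the target answers

The core mask -m 0xE (binary 1110) leaves core 0 free and runs the reactors on cores 1-3, which matches the "Reactor started on core 1/2/3" messages printed once initialization finishes below.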
00:16:55.921 [2024-06-10 13:45:10.293099] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.921 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.180 [2024-06-10 13:45:10.410420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:56.180 [2024-06-10 13:45:10.493247] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.180 [2024-06-10 13:45:10.493295] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.180 [2024-06-10 13:45:10.493309] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.180 [2024-06-10 13:45:10.493321] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.180 [2024-06-10 13:45:10.493331] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.180 [2024-06-10 13:45:10.493458] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.180 [2024-06-10 13:45:10.493571] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.180 [2024-06-10 13:45:10.493572] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.748 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:56.748 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:16:56.748 13:45:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.748 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:56.748 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.006 [2024-06-10 13:45:11.255387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.006 [2024-06-10 13:45:11.293727] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.006 NULL1 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1317372 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.006 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.574 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.574 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:16:57.574 13:45:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.574 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.574 13:45:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.831 13:45:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.831 13:45:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:16:57.831 13:45:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.831 13:45:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.831 13:45:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.089 13:45:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.089 13:45:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:16:58.089 13:45:12 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.089 13:45:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.089 13:45:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.348 13:45:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.348 13:45:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:16:58.348 13:45:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.348 13:45:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.348 13:45:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.607 13:45:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.607 13:45:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:16:58.607 13:45:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.607 13:45:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.607 13:45:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.174 13:45:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.174 13:45:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:16:59.174 13:45:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.174 13:45:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.174 13:45:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.433 13:45:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.433 13:45:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:16:59.433 13:45:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.433 13:45:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.433 13:45:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.692 13:45:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.692 13:45:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:16:59.692 13:45:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.692 13:45:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.692 13:45:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.951 13:45:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.951 13:45:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:16:59.951 13:45:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.951 13:45:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.951 13:45:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.209 13:45:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:00.209 13:45:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:00.209 13:45:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:17:00.209 13:45:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:00.209 13:45:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.777 13:45:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:00.777 13:45:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:00.777 13:45:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.777 13:45:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:00.777 13:45:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.035 13:45:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.035 13:45:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:01.035 13:45:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.035 13:45:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.035 13:45:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.294 13:45:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.294 13:45:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:01.294 13:45:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.294 13:45:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.294 13:45:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.553 13:45:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.553 13:45:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:01.553 13:45:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.553 13:45:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.553 13:45:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.122 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.122 13:45:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:02.122 13:45:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.122 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.122 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.381 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.381 13:45:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:02.381 13:45:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.381 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.381 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.639 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.639 13:45:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:02.639 13:45:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.639 13:45:16 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.639 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.898 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.898 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:02.898 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.898 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.898 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.157 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:03.157 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:03.157 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.157 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:03.157 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.725 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:03.725 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:03.725 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.725 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:03.725 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.010 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.010 13:45:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:04.010 13:45:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.010 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.010 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.269 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.270 13:45:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:04.270 13:45:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.270 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.270 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.528 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.528 13:45:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:04.528 13:45:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.528 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.528 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.787 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.787 13:45:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:04.787 13:45:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.787 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 
-- # xtrace_disable 00:17:04.787 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.355 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.355 13:45:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:05.355 13:45:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.355 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.355 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.614 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.614 13:45:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:05.614 13:45:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.614 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.614 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.873 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.873 13:45:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:05.873 13:45:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.873 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.873 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.132 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:06.132 13:45:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:06.132 13:45:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.132 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:06.132 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.392 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:06.393 13:45:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:06.393 13:45:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.393 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:06.393 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.959 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:06.959 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:06.959 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.959 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:06.959 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.219 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1317372 00:17:07.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1317372) - No such process 
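[Editor's note: the repeated connect_stress.sh records above follow a simple driver pattern: queue twenty RPC batches (the seq 1 20 / cat records), launch the stress binary, then keep replaying the batches for as long as kill -0 reports the tool's PID alive; once the tool exits, kill -0 fails and the script falls through to wait and cleanup. The sketch below is a hedged reconstruction of that pattern from the records only, not the script's source; the contents of each batch are not visible in this log, so a placeholder RPC is used, and it is assumed that rpc_cmd accepts queued RPCs on stdin.]

    # Sketch of the driver loop suggested by connect_stress.sh@20-@39 above.
    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    rm -f "$rpcs"
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    for i in $(seq 1 20); do
        echo "bdev_get_bdevs" >> "$rpcs"   # placeholder; the real batch contents are not shown in this log
    done
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < "$rpcs"                  # assumption: rpc_cmd reads the queued RPCs from stdin
    done
    wait "$PERF_PID"
    rm -f "$rpcs"

Once the stress tool exits on its own, the kill -0 probe reports "No such process" (as seen above) and the cleanup path runs.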
00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1317372 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:07.219 rmmod nvme_tcp 00:17:07.219 rmmod nvme_fabrics 00:17:07.219 rmmod nvme_keyring 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1317277 ']' 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1317277 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 1317277 ']' 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 1317277 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1317277 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1317277' 00:17:07.219 killing process with pid 1317277 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 1317277 00:17:07.219 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 1317277 00:17:07.478 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:07.478 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:07.478 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:07.478 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.478 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:07.478 13:45:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.478 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.478 13:45:21 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.013 13:45:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:10.013 00:17:10.013 real 0m22.695s 00:17:10.013 user 0m41.577s 00:17:10.013 sys 0m11.722s 00:17:10.013 13:45:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:10.013 13:45:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.013 ************************************ 00:17:10.013 END TEST nvmf_connect_stress 00:17:10.013 ************************************ 00:17:10.013 13:45:23 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:10.013 13:45:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:10.013 13:45:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:10.013 13:45:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:10.013 ************************************ 00:17:10.013 START TEST nvmf_fused_ordering 00:17:10.013 ************************************ 00:17:10.013 13:45:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:10.013 * Looking for test storage... 00:17:10.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.013 13:45:24 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:17:10.014 13:45:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.136 
13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:18.136 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:18.136 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.136 13:45:32 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:18.136 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:18.137 Found net devices under 0000:af:00.0: cvl_0_0 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:18.137 Found net devices under 0000:af:00.1: cvl_0_1 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.137 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:18.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:17:18.397 00:17:18.397 --- 10.0.0.2 ping statistics --- 00:17:18.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.397 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:17:18.397 00:17:18.397 --- 10.0.0.1 ping statistics --- 00:17:18.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.397 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1323631 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1323631 00:17:18.397 13:45:32 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 1323631 ']' 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:18.397 13:45:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:18.656 [2024-06-10 13:45:32.881128] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:17:18.656 [2024-06-10 13:45:32.881193] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.656 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.656 [2024-06-10 13:45:32.999875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.656 [2024-06-10 13:45:33.080374] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.656 [2024-06-10 13:45:33.080423] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.656 [2024-06-10 13:45:33.080437] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.656 [2024-06-10 13:45:33.080449] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.656 [2024-06-10 13:45:33.080459] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
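[Editor's note: the nvmf_tcp_init records above amount to a back-to-back TCP rig built from the two E810 ports found earlier: cvl_0_0 is moved into a network namespace to act as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the target application is then started inside the namespace. The listing below only condenses commands already shown in those records; it is a summary, not the common.sh source.]

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2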
00:17:18.656 [2024-06-10 13:45:33.080487] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.594 [2024-06-10 13:45:33.836999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.594 [2024-06-10 13:45:33.857209] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.594 NULL1 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.594 13:45:33 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.594 13:45:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:19.594 [2024-06-10 13:45:33.913167] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:17:19.594 [2024-06-10 13:45:33.913209] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323913 ] 00:17:19.594 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.162 Attached to nqn.2016-06.io.spdk:cnode1 00:17:20.162 Namespace ID: 1 size: 1GB 00:17:20.162 fused_ordering(0) 00:17:20.162 fused_ordering(1) 00:17:20.162 fused_ordering(2) 00:17:20.162 fused_ordering(3) 00:17:20.162 fused_ordering(4) 00:17:20.162 fused_ordering(5) 00:17:20.162 fused_ordering(6) 00:17:20.162 fused_ordering(7) 00:17:20.162 fused_ordering(8) 00:17:20.162 fused_ordering(9) 00:17:20.162 fused_ordering(10) 00:17:20.162 fused_ordering(11) 00:17:20.162 fused_ordering(12) 00:17:20.162 fused_ordering(13) 00:17:20.162 fused_ordering(14) 00:17:20.162 fused_ordering(15) 00:17:20.162 fused_ordering(16) 00:17:20.162 fused_ordering(17) 00:17:20.162 fused_ordering(18) 00:17:20.162 fused_ordering(19) 00:17:20.162 fused_ordering(20) 00:17:20.162 fused_ordering(21) 00:17:20.162 fused_ordering(22) 00:17:20.162 fused_ordering(23) 00:17:20.162 fused_ordering(24) 00:17:20.162 fused_ordering(25) 00:17:20.162 fused_ordering(26) 00:17:20.162 fused_ordering(27) 00:17:20.162 fused_ordering(28) 00:17:20.162 fused_ordering(29) 00:17:20.162 fused_ordering(30) 00:17:20.162 fused_ordering(31) 00:17:20.162 fused_ordering(32) 00:17:20.162 fused_ordering(33) 00:17:20.163 fused_ordering(34) 00:17:20.163 fused_ordering(35) 00:17:20.163 fused_ordering(36) 00:17:20.163 fused_ordering(37) 00:17:20.163 fused_ordering(38) 00:17:20.163 fused_ordering(39) 00:17:20.163 fused_ordering(40) 00:17:20.163 fused_ordering(41) 00:17:20.163 fused_ordering(42) 00:17:20.163 fused_ordering(43) 00:17:20.163 fused_ordering(44) 00:17:20.163 fused_ordering(45) 00:17:20.163 fused_ordering(46) 00:17:20.163 fused_ordering(47) 00:17:20.163 fused_ordering(48) 00:17:20.163 fused_ordering(49) 00:17:20.163 fused_ordering(50) 00:17:20.163 fused_ordering(51) 00:17:20.163 fused_ordering(52) 00:17:20.163 fused_ordering(53) 00:17:20.163 fused_ordering(54) 00:17:20.163 fused_ordering(55) 00:17:20.163 fused_ordering(56) 00:17:20.163 fused_ordering(57) 00:17:20.163 fused_ordering(58) 00:17:20.163 fused_ordering(59) 00:17:20.163 fused_ordering(60) 00:17:20.163 fused_ordering(61) 00:17:20.163 fused_ordering(62) 00:17:20.163 fused_ordering(63) 00:17:20.163 fused_ordering(64) 00:17:20.163 fused_ordering(65) 00:17:20.163 fused_ordering(66) 00:17:20.163 fused_ordering(67) 00:17:20.163 fused_ordering(68) 00:17:20.163 fused_ordering(69) 00:17:20.163 fused_ordering(70) 00:17:20.163 fused_ordering(71) 00:17:20.163 fused_ordering(72) 00:17:20.163 fused_ordering(73) 00:17:20.163 fused_ordering(74) 00:17:20.163 fused_ordering(75) 00:17:20.163 fused_ordering(76) 00:17:20.163 fused_ordering(77) 00:17:20.163 fused_ordering(78) 00:17:20.163 fused_ordering(79) 
00:17:20.163 fused_ordering(80) 00:17:20.163 fused_ordering(81) 00:17:20.163 fused_ordering(82) ... 00:17:22.802 fused_ordering(938) 00:17:22.802 fused_ordering(939) 00:17:22.802 [one fused_ordering(N) entry per iteration from 80 through 939, timestamps advancing 00:17:20.163 - 00:17:22.802; condensed]
fused_ordering(940) 00:17:22.802 fused_ordering(941) 00:17:22.802 fused_ordering(942) 00:17:22.802 fused_ordering(943) 00:17:22.802 fused_ordering(944) 00:17:22.802 fused_ordering(945) 00:17:22.802 fused_ordering(946) 00:17:22.802 fused_ordering(947) 00:17:22.802 fused_ordering(948) 00:17:22.802 fused_ordering(949) 00:17:22.802 fused_ordering(950) 00:17:22.802 fused_ordering(951) 00:17:22.802 fused_ordering(952) 00:17:22.802 fused_ordering(953) 00:17:22.802 fused_ordering(954) 00:17:22.802 fused_ordering(955) 00:17:22.802 fused_ordering(956) 00:17:22.802 fused_ordering(957) 00:17:22.802 fused_ordering(958) 00:17:22.802 fused_ordering(959) 00:17:22.802 fused_ordering(960) 00:17:22.802 fused_ordering(961) 00:17:22.802 fused_ordering(962) 00:17:22.802 fused_ordering(963) 00:17:22.802 fused_ordering(964) 00:17:22.802 fused_ordering(965) 00:17:22.802 fused_ordering(966) 00:17:22.802 fused_ordering(967) 00:17:22.802 fused_ordering(968) 00:17:22.802 fused_ordering(969) 00:17:22.802 fused_ordering(970) 00:17:22.802 fused_ordering(971) 00:17:22.802 fused_ordering(972) 00:17:22.802 fused_ordering(973) 00:17:22.802 fused_ordering(974) 00:17:22.802 fused_ordering(975) 00:17:22.802 fused_ordering(976) 00:17:22.802 fused_ordering(977) 00:17:22.802 fused_ordering(978) 00:17:22.802 fused_ordering(979) 00:17:22.802 fused_ordering(980) 00:17:22.802 fused_ordering(981) 00:17:22.802 fused_ordering(982) 00:17:22.802 fused_ordering(983) 00:17:22.802 fused_ordering(984) 00:17:22.802 fused_ordering(985) 00:17:22.802 fused_ordering(986) 00:17:22.802 fused_ordering(987) 00:17:22.802 fused_ordering(988) 00:17:22.802 fused_ordering(989) 00:17:22.802 fused_ordering(990) 00:17:22.802 fused_ordering(991) 00:17:22.802 fused_ordering(992) 00:17:22.802 fused_ordering(993) 00:17:22.802 fused_ordering(994) 00:17:22.802 fused_ordering(995) 00:17:22.802 fused_ordering(996) 00:17:22.802 fused_ordering(997) 00:17:22.802 fused_ordering(998) 00:17:22.802 fused_ordering(999) 00:17:22.802 fused_ordering(1000) 00:17:22.802 fused_ordering(1001) 00:17:22.802 fused_ordering(1002) 00:17:22.802 fused_ordering(1003) 00:17:22.802 fused_ordering(1004) 00:17:22.802 fused_ordering(1005) 00:17:22.802 fused_ordering(1006) 00:17:22.802 fused_ordering(1007) 00:17:22.802 fused_ordering(1008) 00:17:22.802 fused_ordering(1009) 00:17:22.802 fused_ordering(1010) 00:17:22.802 fused_ordering(1011) 00:17:22.802 fused_ordering(1012) 00:17:22.802 fused_ordering(1013) 00:17:22.802 fused_ordering(1014) 00:17:22.802 fused_ordering(1015) 00:17:22.802 fused_ordering(1016) 00:17:22.802 fused_ordering(1017) 00:17:22.802 fused_ordering(1018) 00:17:22.802 fused_ordering(1019) 00:17:22.802 fused_ordering(1020) 00:17:22.802 fused_ordering(1021) 00:17:22.802 fused_ordering(1022) 00:17:22.802 fused_ordering(1023) 00:17:22.802 13:45:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:22.802 13:45:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:22.802 13:45:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:22.802 13:45:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:17:22.802 13:45:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.802 13:45:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:17:22.802 13:45:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.802 13:45:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:17:22.802 rmmod nvme_tcp 00:17:22.802 rmmod nvme_fabrics 00:17:22.802 rmmod nvme_keyring 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1323631 ']' 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1323631 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 1323631 ']' 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 1323631 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1323631 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1323631' 00:17:22.802 killing process with pid 1323631 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 1323631 00:17:22.802 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 1323631 00:17:23.062 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:23.062 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:23.062 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:23.062 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.062 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:23.062 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.062 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.062 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.968 13:45:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:24.968 00:17:24.968 real 0m15.411s 00:17:24.968 user 0m8.079s 00:17:24.968 sys 0m9.037s 00:17:24.968 13:45:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:24.968 13:45:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.968 ************************************ 00:17:24.968 END TEST nvmf_fused_ordering 00:17:24.968 ************************************ 00:17:25.228 13:45:39 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:25.228 13:45:39 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:25.228 13:45:39 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:25.228 13:45:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.228 
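The tail of the fused_ordering run above is the standard nvmftestfini teardown: flush outstanding I/O, retry unloading the NVMe/TCP initiator modules until they are no longer busy, kill the nvmf_tgt process (pid 1323631 here), and remove the test namespace. A minimal sketch of that sequence, assuming the target pid is held in $nvmfpid and the cvl_0_* names used in this job; the namespace-removal line is an assumption, not the literal _remove_spdk_ns code:

# Teardown sketch; $nvmfpid, interface and namespace names are taken from this run
sync                                          # flush dirty pages before unloading modules
for i in {1..20}; do
  modprobe -v -r nvme-tcp && break            # module may still be in use; retry up to 20 times
  sleep 1
done
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                               # stop the SPDK nvmf_tgt reactors
wait "$nvmfpid" 2>/dev/null
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumption: drop the target-side namespace
ip -4 addr flush cvl_0_1                      # clear the initiator-side test address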
************************************ 00:17:25.228 START TEST nvmf_delete_subsystem 00:17:25.228 ************************************ 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:25.228 * Looking for test storage... 00:17:25.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.228 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.229 13:45:39 
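The NVME_HOSTNQN/NVME_HOSTID values sourced above are the host identity that any later nvme connect from this box presents to the target. A small illustrative sketch of that pattern, assuming nvme-cli is installed; the extraction and the commented connect line are examples, not the script's exact code:

# Host identity sketch (nvme-cli assumed present)
NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}               # keep just the UUID part
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# A later connect can then reuse the same identity, for example:
# nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"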
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.229 13:45:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
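nvmftestinit now probes the machine for supported NICs: the arrays built in the next lines hold the Intel E810/X722 and Mellanox PCI device IDs, and each matching PCI function is then mapped to its kernel net device through /sys/bus/pci/devices/<bdf>/net. A rough stand-alone equivalent of that scan, assuming lspci is available and using only the two E810 IDs that match on this host; not the exact gather_supported_nvmf_pci_devs code:

# Discovery sketch: resolve supported Intel E810 functions to net devices via sysfs
intel=8086
for dev_id in 1592 159b; do
  for pci in $(lspci -Dn -d "${intel}:${dev_id}" | awk '{print $1}'); do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$netdev" ] && echo "Found ${pci} (0x${intel} - 0x${dev_id}): $(basename "$netdev")"
    done
  done
done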
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:33.354 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:33.354 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.354 
13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:33.354 Found net devices under 0000:af:00.0: cvl_0_0 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:33.354 Found net devices under 0000:af:00.1: cvl_0_1 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.354 13:45:47 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:33.354 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:33.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:17:33.614 00:17:33.614 --- 10.0.0.2 ping statistics --- 00:17:33.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.614 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:33.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:17:33.614 00:17:33.614 --- 10.0.0.1 ping statistics --- 00:17:33.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.614 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1328883 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1328883 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 1328883 ']' 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.614 13:45:47 
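The namespace split above is what lets one host act as both NVMe-oF target and initiator: the ice port cvl_0_0 is moved into a private namespace with 10.0.0.2/24, its peer port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables rule admits TCP port 4420, and a one-packet ping in each direction confirms reachability before the target starts. The same steps collected into one sketch, using the names reported in this run:

# Back-to-back namespace setup (interface names cvl_0_0 / cvl_0_1 as discovered above)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # root namespace -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # namespace -> initiator address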
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:33.614 13:45:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:33.614 [2024-06-10 13:45:48.021839] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:17:33.614 [2024-06-10 13:45:48.021898] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.614 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.873 [2024-06-10 13:45:48.150022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:33.873 [2024-06-10 13:45:48.234680] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.873 [2024-06-10 13:45:48.234726] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.873 [2024-06-10 13:45:48.234739] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.873 [2024-06-10 13:45:48.234751] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.873 [2024-06-10 13:45:48.234761] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
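nvmfappstart launches nvmf_tgt inside that namespace and then waits until the application's RPC socket answers before the test issues any rpc_cmd calls. A minimal launch-and-wait sketch, assuming the SPDK build path from this workspace; the polling loop is an illustrative stand-in for the real waitforlisten helper:

# Launch sketch; flags (-i 0 -e 0xFFFF -m 0x3) and paths are taken from the trace above
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
for i in {1..100}; do                                     # max_retries=100, as in the trace
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
  sleep 0.5
done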
00:17:33.873 [2024-06-10 13:45:48.234813] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.873 [2024-06-10 13:45:48.234819] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.816 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:34.816 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:17:34.816 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:34.816 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:34.816 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:34.816 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.816 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:34.816 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.816 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:34.816 [2024-06-10 13:45:48.988611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.816 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.817 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:34.817 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.817 13:45:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:34.817 [2024-06-10 13:45:49.008858] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:34.817 NULL1 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:34.817 Delay0 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.817 13:45:49 
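Together with the nvmf_subsystem_add_ns call on the next lines, the rpc_cmd sequence above builds the whole fixture for this test: a TCP transport with 8192-byte in-capsule data, subsystem cnode1 (serial SPDK00000000000001, up to 10 namespaces), a listener on 10.0.0.2:4420, a 1000 MB null bdev, and a delay bdev layered on top whose -r/-t/-w/-n values add on the order of a second of artificial latency, so that I/O is still queued when the subsystem is deleted under load. The same calls expressed through rpc.py, followed by the perf run and deletion that the rest of the log shows; a sketch of the flow, not the literal delete_subsystem.sh code:

# Fixture sketch; rpc_cmd in the test forwards to rpc.py with these same arguments
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                       # 1000 MB backing bdev, 512-byte blocks
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Drive queued I/O against the slow namespace, then delete the subsystem while it is busy;
# the "Read/Write completed with error (sct=0, sc=8)" lines below are the I/O failing as it goes away.
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1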
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1329017 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:34.817 13:45:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:34.817 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.817 [2024-06-10 13:45:49.089987] subsystem.c:1570:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:36.721 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.721 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.721 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 
Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 [2024-06-10 13:45:51.222331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4cad20 is same with the state(5) to be set 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.981 Write completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 Read completed with error (sct=0, sc=8) 00:17:36.981 starting I/O failed: -6 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 
Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 starting I/O failed: -6 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 starting I/O failed: -6 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 starting I/O failed: -6 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 starting I/O failed: -6 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 starting I/O failed: -6 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 starting I/O failed: -6 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 starting I/O failed: -6 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 
00:17:36.982 [2024-06-10 13:45:51.223012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff63800c470 is same with the state(5) to be set 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Read completed with error (sct=0, sc=8) 00:17:36.982 Write completed with error (sct=0, sc=8) 00:17:37.969 [2024-06-10 13:45:52.187766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c91a0 is same with the state(5) to be set 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 
Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 [2024-06-10 13:45:52.224414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4a8070 is same with the state(5) to be set 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Write completed with error (sct=0, sc=8) 00:17:37.969 Read completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 [2024-06-10 13:45:52.224705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c9c30 is same with the state(5) to be set 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with 
error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 [2024-06-10 13:45:52.225043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff63800bfe0 is same with the state(5) to be set 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 Write completed with error (sct=0, sc=8) 00:17:37.970 Read completed with error (sct=0, sc=8) 00:17:37.970 [2024-06-10 13:45:52.225207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff63800c780 is same with the state(5) to be set 00:17:37.970 Initializing NVMe Controllers 00:17:37.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:37.970 Controller IO queue size 128, less than required. 00:17:37.970 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:37.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:37.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:37.970 Initialization complete. Launching workers. 
00:17:37.970 ======================================================== 00:17:37.970 Latency(us) 00:17:37.970 Device Information : IOPS MiB/s Average min max 00:17:37.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.60 0.09 886464.52 435.16 1013287.46 00:17:37.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 168.15 0.08 899800.88 356.15 1013807.42 00:17:37.970 ======================================================== 00:17:37.970 Total : 342.76 0.17 893007.25 356.15 1013807.42 00:17:37.970 00:17:37.970 [2024-06-10 13:45:52.225865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4c91a0 (9): Bad file descriptor 00:17:37.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:17:37.970 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.970 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:17:37.970 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1329017 00:17:37.970 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1329017 00:17:38.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1329017) - No such process 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1329017 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 1329017 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 1329017 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:38.537 [2024-06-10 13:45:52.752320] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1329708 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1329708 00:17:38.537 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:38.537 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.537 [2024-06-10 13:45:52.823220] subsystem.c:1570:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
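For reference, the delete_subsystem flow traced above reduces to a short RPC sequence: create the TCP transport, create and expose cnode1, back it with a null bdev wrapped in a delay bdev so I/O stays queued, start spdk_nvme_perf against the listener, and then delete the subsystem out from under the running workload. A minimal sketch of that sequence follows, assuming the rpc.py client is invoked directly instead of through the test harness's rpc_cmd wrapper; the paths, NQNs and flags are copied from the trace, while the inline comments are interpretation rather than part of the captured log.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                   # null backing bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &        # keep queued I/O outstanding
  sleep 2                                                # let perf ramp up
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # delete while perf is still running
  wait                                                   # perf reports the aborted I/O (sc=8) seen above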
00:17:39.103 13:45:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:39.103 13:45:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1329708 00:17:39.103 13:45:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:39.362 13:45:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:39.362 13:45:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1329708 00:17:39.362 13:45:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:39.929 13:45:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:39.929 13:45:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1329708 00:17:39.929 13:45:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:40.497 13:45:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:40.497 13:45:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1329708 00:17:40.497 13:45:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:41.065 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:41.065 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1329708 00:17:41.065 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:41.632 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:41.632 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1329708 00:17:41.632 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:41.891 Initializing NVMe Controllers 00:17:41.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:41.891 Controller IO queue size 128, less than required. 00:17:41.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:41.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:41.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:41.891 Initialization complete. Launching workers. 
00:17:41.891 ======================================================== 00:17:41.892 Latency(us) 00:17:41.892 Device Information : IOPS MiB/s Average min max 00:17:41.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003184.41 1000269.10 1010315.61 00:17:41.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006140.90 1000227.36 1043592.58 00:17:41.892 ======================================================== 00:17:41.892 Total : 256.00 0.12 1004662.65 1000227.36 1043592.58 00:17:41.892 00:17:41.892 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:41.892 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1329708 00:17:41.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1329708) - No such process 00:17:41.892 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1329708 00:17:41.892 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:41.892 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:41.892 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.892 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:17:41.892 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.892 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:17:41.892 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.892 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.892 rmmod nvme_tcp 00:17:41.892 rmmod nvme_fabrics 00:17:41.892 rmmod nvme_keyring 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1328883 ']' 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1328883 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 1328883 ']' 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 1328883 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1328883 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1328883' 00:17:42.151 killing process with pid 1328883 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 1328883 00:17:42.151 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 
1328883 00:17:42.411 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.411 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.411 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.411 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.411 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.411 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.411 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.411 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.316 13:45:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:44.316 00:17:44.316 real 0m19.226s 00:17:44.316 user 0m30.458s 00:17:44.316 sys 0m8.235s 00:17:44.316 13:45:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:44.316 13:45:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:44.316 ************************************ 00:17:44.316 END TEST nvmf_delete_subsystem 00:17:44.316 ************************************ 00:17:44.316 13:45:58 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:44.316 13:45:58 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:44.316 13:45:58 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:44.316 13:45:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:44.575 ************************************ 00:17:44.575 START TEST nvmf_ns_masking 00:17:44.575 ************************************ 00:17:44.575 13:45:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:44.575 * Looking for test storage... 
00:17:44.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.575 13:45:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.575 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:44.575 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.575 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.575 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.575 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.575 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=aff0bf9f-dff9-413b-8d78-3353f8b6c3d4 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:44.576 13:45:58 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:17:44.576 13:45:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:54.558 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:54.559 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:54.559 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:54.559 Found net devices under 0000:af:00.0: cvl_0_0 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:54.559 Found net devices under 0000:af:00.1: cvl_0_1 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:54.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:17:54.559 00:17:54.559 --- 10.0.0.2 ping statistics --- 00:17:54.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.559 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:17:54.559 00:17:54.559 --- 10.0.0.1 ping statistics --- 00:17:54.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.559 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1334791 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1334791 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 1334791 ']' 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.559 13:46:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:54.559 [2024-06-10 13:46:07.657699] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
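The nvmf_tcp_init trace above amounts to the following network plumbing: the target-side port is moved into its own network namespace, both sides get addresses on 10.0.0.0/24, port 4420 is opened, and reachability is verified in both directions before the target is started. A minimal sketch, assuming root and that the two E810 ports are already exposed as cvl_0_0/cvl_0_1 as in the trace; the commands are the ones traced above, the comments are interpretation.

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                         # target interface gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator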
00:17:54.559 [2024-06-10 13:46:07.657762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.559 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.559 [2024-06-10 13:46:07.786801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.559 [2024-06-10 13:46:07.874643] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.559 [2024-06-10 13:46:07.874689] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.559 [2024-06-10 13:46:07.874702] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.559 [2024-06-10 13:46:07.874715] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.559 [2024-06-10 13:46:07.874725] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.559 [2024-06-10 13:46:07.874785] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.559 [2024-06-10 13:46:07.874898] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.559 [2024-06-10 13:46:07.875012] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.559 [2024-06-10 13:46:07.875013] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.559 13:46:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:54.559 13:46:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:17:54.559 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:54.559 13:46:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:54.559 13:46:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.559 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.559 13:46:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:54.559 [2024-06-10 13:46:08.766000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.560 13:46:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:17:54.560 13:46:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:17:54.560 13:46:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:54.560 Malloc1 00:17:54.819 13:46:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:54.819 Malloc2 00:17:55.078 13:46:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:55.078 13:46:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:55.337 13:46:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.595 [2024-06-10 13:46:09.980426] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.595 13:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:17:55.595 13:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I aff0bf9f-dff9-413b-8d78-3353f8b6c3d4 -a 10.0.0.2 -s 4420 -i 4 00:17:55.854 13:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:17:55.854 13:46:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:17:55.854 13:46:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.854 13:46:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:17:55.854 13:46:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:17:57.758 13:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:17:57.758 13:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:57.758 13:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:17:58.016 [ 0]:0x1 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f1083e95272846dd91f1cdbd1bef50b0 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f1083e95272846dd91f1cdbd1bef50b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.016 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:17:58.274 [ 0]:0x1 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f1083e95272846dd91f1cdbd1bef50b0 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f1083e95272846dd91f1cdbd1bef50b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:17:58.274 [ 1]:0x2 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0a59151e2719456eae2f3578461f1207 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0a59151e2719456eae2f3578461f1207 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:17:58.274 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.533 13:46:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.792 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:59.051 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:17:59.051 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I aff0bf9f-dff9-413b-8d78-3353f8b6c3d4 -a 10.0.0.2 -s 4420 -i 4 00:17:59.051 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:59.051 13:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:17:59.051 13:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.051 13:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:17:59.051 13:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:17:59.051 13:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == 
nvme_device_counter )) 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:01.584 [ 0]:0x2 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0a59151e2719456eae2f3578461f1207 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0a59151e2719456eae2f3578461f1207 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:01.584 [ 0]:0x1 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:01.584 13:46:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:01.584 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f1083e95272846dd91f1cdbd1bef50b0 00:18:01.584 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f1083e95272846dd91f1cdbd1bef50b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.584 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:18:01.584 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:01.584 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:01.584 [ 1]:0x2 00:18:01.584 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:01.584 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:01.843 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0a59151e2719456eae2f3578461f1207 00:18:01.843 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0a59151e2719456eae2f3578461f1207 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.843 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:02.102 
13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:02.102 [ 0]:0x2 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0a59151e2719456eae2f3578461f1207 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0a59151e2719456eae2f3578461f1207 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:02.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.102 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:02.361 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:18:02.361 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I aff0bf9f-dff9-413b-8d78-3353f8b6c3d4 -a 10.0.0.2 -s 4420 -i 4 00:18:02.620 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:02.620 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:18:02.620 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:02.620 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:18:02.620 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:18:02.620 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:04.526 [ 0]:0x1 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f1083e95272846dd91f1cdbd1bef50b0 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f1083e95272846dd91f1cdbd1bef50b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:04.526 [ 1]:0x2 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:04.526 13:46:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:04.785 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0a59151e2719456eae2f3578461f1207 00:18:04.785 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0a59151e2719456eae2f3578461f1207 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.785 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:04.785 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:18:04.785 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:04.785 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:18:04.785 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:05.044 [ 0]:0x2 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0a59151e2719456eae2f3578461f1207 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0a59151e2719456eae2f3578461f1207 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:05.044 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:05.303 [2024-06-10 13:46:19.626356] nvmf_rpc.c:1793:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:05.303 request: 00:18:05.303 { 00:18:05.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.303 "nsid": 2, 00:18:05.303 "host": "nqn.2016-06.io.spdk:host1", 00:18:05.303 "method": 
"nvmf_ns_remove_host", 00:18:05.303 "req_id": 1 00:18:05.303 } 00:18:05.303 Got JSON-RPC error response 00:18:05.303 response: 00:18:05.303 { 00:18:05.303 "code": -32602, 00:18:05.303 "message": "Invalid parameters" 00:18:05.303 } 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:05.303 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:05.562 [ 0]:0x2 00:18:05.562 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:05.562 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:05.562 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0a59151e2719456eae2f3578461f1207 00:18:05.562 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0a59151e2719456eae2f3578461f1207 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.562 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:18:05.562 13:46:19 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:05.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.562 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:05.820 rmmod nvme_tcp 00:18:05.820 rmmod nvme_fabrics 00:18:05.820 rmmod nvme_keyring 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1334791 ']' 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1334791 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 1334791 ']' 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 1334791 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:05.820 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1334791 00:18:06.079 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:06.079 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:06.079 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1334791' 00:18:06.079 killing process with pid 1334791 00:18:06.079 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 1334791 00:18:06.079 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 1334791 00:18:06.337 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.337 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.337 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.337 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.337 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.337 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.337 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.337 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.344 
13:46:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:08.344 00:18:08.344 real 0m23.845s 00:18:08.344 user 0m55.230s 00:18:08.344 sys 0m9.538s 00:18:08.344 13:46:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:08.344 13:46:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:08.344 ************************************ 00:18:08.344 END TEST nvmf_ns_masking 00:18:08.344 ************************************ 00:18:08.344 13:46:22 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:18:08.344 13:46:22 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:08.344 13:46:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:08.344 13:46:22 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:08.344 13:46:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:08.344 ************************************ 00:18:08.344 START TEST nvmf_nvme_cli 00:18:08.344 ************************************ 00:18:08.344 13:46:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:08.603 * Looking for test storage... 00:18:08.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.603 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.604 13:46:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:16.721 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:16.721 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.721 13:46:30 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:16.721 Found net devices under 0000:af:00.0: cvl_0_0 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:16.721 Found net devices under 0000:af:00.1: cvl_0_1 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:16.721 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:16.722 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.722 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:16.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:18:16.722 00:18:16.722 --- 10.0.0.2 ping statistics --- 00:18:16.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.722 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:16.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:18:16.722 00:18:16.722 --- 10.0.0.1 ping statistics --- 00:18:16.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.722 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:16.722 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1341530 00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1341530 00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 1341530 ']' 00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
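The 10.0.0.x addressing used by this test comes from the network namespace set up in the trace just above: the target-side port is isolated in its own namespace so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) exchange real TCP traffic on a single machine, verified by the two pings. A rough plain-shell reconstruction, with the interface names specific to this host and root privileges assumed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The nvmf_tgt process is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt, as traced above), which is why the listener at 10.0.0.2:4420 is reachable from the host-side nvme connect commands.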
00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:16.981 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:16.981 [2024-06-10 13:46:31.282434] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:18:16.981 [2024-06-10 13:46:31.282505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.981 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.981 [2024-06-10 13:46:31.412077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.239 [2024-06-10 13:46:31.498638] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.239 [2024-06-10 13:46:31.498685] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.239 [2024-06-10 13:46:31.498698] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.239 [2024-06-10 13:46:31.498711] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.239 [2024-06-10 13:46:31.498721] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.239 [2024-06-10 13:46:31.498781] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.239 [2024-06-10 13:46:31.498874] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.239 [2024-06-10 13:46:31.498986] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.239 [2024-06-10 13:46:31.498986] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.807 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:17.807 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:18:17.807 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.807 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:17.807 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.807 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.807 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.807 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.807 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.807 [2024-06-10 13:46:32.244704] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.807 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.808 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:17.808 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.808 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.808 Malloc0 00:18:17.808 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.808 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:17.808 13:46:32 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.808 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:18.067 Malloc1 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:18.067 [2024-06-10 13:46:32.327253] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:18.067 00:18:18.067 Discovery Log Number of Records 2, Generation counter 2 00:18:18.067 =====Discovery Log Entry 0====== 00:18:18.067 trtype: tcp 00:18:18.067 adrfam: ipv4 00:18:18.067 subtype: current discovery subsystem 00:18:18.067 treq: not required 00:18:18.067 portid: 0 00:18:18.067 trsvcid: 4420 00:18:18.067 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:18.067 traddr: 10.0.0.2 00:18:18.067 eflags: explicit discovery connections, duplicate discovery information 00:18:18.067 sectype: none 00:18:18.067 =====Discovery Log Entry 1====== 00:18:18.067 trtype: tcp 00:18:18.067 adrfam: ipv4 00:18:18.067 subtype: nvme subsystem 00:18:18.067 treq: not required 00:18:18.067 portid: 0 00:18:18.067 trsvcid: 4420 
00:18:18.067 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:18.067 traddr: 10.0.0.2 00:18:18.067 eflags: none 00:18:18.067 sectype: none 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:18.067 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:19.443 13:46:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:19.443 13:46:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:18:19.443 13:46:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.443 13:46:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:18:19.443 13:46:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:18:19.443 13:46:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:18:21.347 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:21.347 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:21.347 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:21.606 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:18:21.606 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.606 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:18:21.606 13:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:21.606 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:21.606 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:21.606 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:21.606 13:46:36 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:18:21.606 /dev/nvme0n1 ]] 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:21.606 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:21.865 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:21.865 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:21.865 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:21.865 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:21.865 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:21.865 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:21.865 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:21.865 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:21.865 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:21.865 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:21.865 13:46:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:21.865 13:46:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:22.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:22.124 13:46:36 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:22.124 rmmod nvme_tcp 00:18:22.124 rmmod nvme_fabrics 00:18:22.124 rmmod nvme_keyring 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1341530 ']' 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1341530 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 1341530 ']' 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 1341530 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:22.124 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1341530 00:18:22.383 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:22.383 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:22.383 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1341530' 00:18:22.383 killing process with pid 1341530 00:18:22.383 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 1341530 00:18:22.383 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 1341530 00:18:22.642 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:22.642 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:22.642 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:22.642 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.642 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:22.642 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.642 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.642 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.546 13:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:24.546 00:18:24.546 real 0m16.214s 00:18:24.546 user 0m23.486s 00:18:24.546 sys 0m7.220s 00:18:24.546 13:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:24.546 13:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:24.546 ************************************ 00:18:24.546 END TEST nvmf_nvme_cli 00:18:24.546 ************************************ 00:18:24.546 13:46:38 nvmf_tcp 
-- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:18:24.546 13:46:38 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:24.546 13:46:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:24.546 13:46:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:24.546 13:46:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:24.805 ************************************ 00:18:24.805 START TEST nvmf_vfio_user 00:18:24.805 ************************************ 00:18:24.805 13:46:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:24.805 * Looking for test storage... 00:18:24.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:24.806 
13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1343016 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1343016' 00:18:24.806 Process pid: 1343016 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1343016 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 1343016 ']' 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:24.806 13:46:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:24.806 [2024-06-10 13:46:39.244529] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:18:24.806 [2024-06-10 13:46:39.244602] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.065 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.065 [2024-06-10 13:46:39.367822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:25.065 [2024-06-10 13:46:39.453392] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.065 [2024-06-10 13:46:39.453438] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.065 [2024-06-10 13:46:39.453452] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.065 [2024-06-10 13:46:39.453464] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.065 [2024-06-10 13:46:39.453474] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
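At this point setup_nvmf_vfio_user has launched the target application and is polling its RPC socket; the reactor and transport messages that follow are that target coming up. A minimal sketch of the equivalent manual steps, reusing the paths and flags recorded in this log (SPDK_DIR and the polling loop are illustrative assumptions, not part of the test script):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # build tree used in this run; adjust for another tree
    rm -rf /var/run/vfio-user                                    # start from a clean socket directory, as nvmf_vfio_user.sh@47 does
    # -i 0 selects shared-memory id 0, -e 0xFFFF sets the tracepoint group mask
    # (matching the "Tracepoint Group Mask 0xFFFF specified" notice above),
    # -m '[0,1,2,3]' runs reactors on cores 0-3
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # poll the default UNIX-domain RPC socket until the target answers,
    # roughly what waitforlisten does before the test continues
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem and nvmf_subsystem_add_listener calls recorded just below then wire the two vfio-user endpoints onto this target.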
00:18:25.065 [2024-06-10 13:46:39.453528] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.065 [2024-06-10 13:46:39.453611] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.065 [2024-06-10 13:46:39.453687] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:18:25.065 [2024-06-10 13:46:39.453688] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.999 13:46:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:25.999 13:46:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:18:25.999 13:46:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:26.935 13:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:26.935 13:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:26.935 13:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:26.935 13:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:26.935 13:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:26.935 13:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:27.194 Malloc1 00:18:27.194 13:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:27.453 13:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:27.453 13:46:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:27.711 13:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:27.711 13:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:27.711 13:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:27.970 Malloc2 00:18:27.970 13:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:28.228 13:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:28.487 13:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:28.748 13:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:28.748 13:46:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:28.748 13:46:43 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:28.748 13:46:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:28.748 13:46:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:28.748 13:46:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:28.748 [2024-06-10 13:46:43.028827] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:18:28.748 [2024-06-10 13:46:43.028865] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343707 ] 00:18:28.748 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.748 [2024-06-10 13:46:43.063103] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:28.748 [2024-06-10 13:46:43.072021] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:28.748 [2024-06-10 13:46:43.072048] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8fd3dea000 00:18:28.748 [2024-06-10 13:46:43.073024] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:28.748 [2024-06-10 13:46:43.074022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:28.748 [2024-06-10 13:46:43.075026] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:28.748 [2024-06-10 13:46:43.076034] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:28.748 [2024-06-10 13:46:43.077033] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:28.748 [2024-06-10 13:46:43.078035] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:28.748 [2024-06-10 13:46:43.079042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:28.748 [2024-06-10 13:46:43.080045] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:28.748 [2024-06-10 13:46:43.081055] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:28.748 [2024-06-10 13:46:43.081073] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8fd3ddf000 00:18:28.748 [2024-06-10 13:46:43.082320] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:28.748 [2024-06-10 13:46:43.098733] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:28.748 [2024-06-10 13:46:43.098768] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:18:28.748 [2024-06-10 13:46:43.104198] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:28.748 [2024-06-10 13:46:43.104250] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:28.748 [2024-06-10 13:46:43.104347] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:18:28.748 [2024-06-10 13:46:43.104373] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:18:28.748 [2024-06-10 13:46:43.104383] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:18:28.748 [2024-06-10 13:46:43.105202] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:28.748 [2024-06-10 13:46:43.105216] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:18:28.748 [2024-06-10 13:46:43.105228] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:18:28.748 [2024-06-10 13:46:43.106202] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:28.748 [2024-06-10 13:46:43.106215] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:18:28.748 [2024-06-10 13:46:43.106227] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:18:28.748 [2024-06-10 13:46:43.107209] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:28.748 [2024-06-10 13:46:43.107223] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:28.748 [2024-06-10 13:46:43.108215] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:28.748 [2024-06-10 13:46:43.108228] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:18:28.748 [2024-06-10 13:46:43.108240] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:18:28.748 [2024-06-10 13:46:43.108251] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:28.748 [2024-06-10 13:46:43.108361] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:18:28.748 [2024-06-10 13:46:43.108369] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:28.748 [2024-06-10 13:46:43.108378] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:28.748 [2024-06-10 13:46:43.109224] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:28.748 [2024-06-10 13:46:43.110229] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:28.748 [2024-06-10 13:46:43.111235] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:28.748 [2024-06-10 13:46:43.112232] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:28.749 [2024-06-10 13:46:43.112317] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:28.749 [2024-06-10 13:46:43.113241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:28.749 [2024-06-10 13:46:43.113253] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:28.749 [2024-06-10 13:46:43.113262] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113288] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:18:28.749 [2024-06-10 13:46:43.113301] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113324] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:28.749 [2024-06-10 13:46:43.113333] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:28.749 [2024-06-10 13:46:43.113352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.113401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:28.749 [2024-06-10 13:46:43.113415] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:18:28.749 [2024-06-10 13:46:43.113424] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:18:28.749 [2024-06-10 13:46:43.113432] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:18:28.749 [2024-06-10 13:46:43.113445] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:28.749 [2024-06-10 13:46:43.113454] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:18:28.749 [2024-06-10 13:46:43.113462] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:18:28.749 [2024-06-10 13:46:43.113470] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113485] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.113514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:28.749 [2024-06-10 13:46:43.113529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.749 [2024-06-10 13:46:43.113542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.749 [2024-06-10 13:46:43.113554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.749 [2024-06-10 13:46:43.113566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.749 [2024-06-10 13:46:43.113579] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113594] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.113618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:28.749 [2024-06-10 13:46:43.113628] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:18:28.749 [2024-06-10 13:46:43.113637] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113648] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113657] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.113687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:28.749 [2024-06-10 13:46:43.113838] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113851] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113863] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:28.749 [2024-06-10 13:46:43.113871] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:28.749 [2024-06-10 13:46:43.113880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.113897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:28.749 [2024-06-10 13:46:43.113912] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:18:28.749 [2024-06-10 13:46:43.113933] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113946] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.113957] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:28.749 [2024-06-10 13:46:43.113965] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:28.749 [2024-06-10 13:46:43.113974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.113997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:28.749 [2024-06-10 13:46:43.114015] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.114027] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.114038] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:28.749 [2024-06-10 13:46:43.114046] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:28.749 [2024-06-10 13:46:43.114055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.114069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:28.749 [2024-06-10 13:46:43.114082] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.114093] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.114105] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.114115] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.114124] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.114132] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:18:28.749 [2024-06-10 13:46:43.114141] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:18:28.749 [2024-06-10 13:46:43.114149] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:18:28.749 [2024-06-10 13:46:43.114176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.114191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:28.749 [2024-06-10 13:46:43.114209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.114224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:28.749 [2024-06-10 13:46:43.114241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.114256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:28.749 [2024-06-10 13:46:43.114275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.114290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:28.749 [2024-06-10 13:46:43.114307] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:28.749 [2024-06-10 13:46:43.114315] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:28.749 [2024-06-10 13:46:43.114322] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:28.749 [2024-06-10 13:46:43.114328] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:28.749 [2024-06-10 13:46:43.114337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:28.749 [2024-06-10 13:46:43.114348] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:28.749 [2024-06-10 13:46:43.114356] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:28.749 [2024-06-10 13:46:43.114365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.114376] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:28.749 [2024-06-10 13:46:43.114384] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:28.749 [2024-06-10 13:46:43.114393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:28.749 [2024-06-10 13:46:43.114404] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:28.750 [2024-06-10 13:46:43.114412] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:28.750 [2024-06-10 13:46:43.114421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:28.750 [2024-06-10 13:46:43.114432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:28.750 [2024-06-10 13:46:43.114450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:28.750 [2024-06-10 13:46:43.114465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:28.750 [2024-06-10 13:46:43.114480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:28.750 ===================================================== 00:18:28.750 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:28.750 ===================================================== 00:18:28.750 Controller Capabilities/Features 00:18:28.750 ================================ 00:18:28.750 Vendor ID: 4e58 00:18:28.750 Subsystem Vendor ID: 4e58 00:18:28.750 Serial Number: SPDK1 00:18:28.750 Model Number: SPDK bdev Controller 00:18:28.750 Firmware Version: 24.09 00:18:28.750 Recommended Arb Burst: 6 00:18:28.750 IEEE OUI Identifier: 8d 6b 50 00:18:28.750 Multi-path I/O 00:18:28.750 May have multiple subsystem ports: Yes 00:18:28.750 May have multiple controllers: Yes 00:18:28.750 Associated with SR-IOV VF: No 00:18:28.750 Max Data Transfer Size: 131072 00:18:28.750 Max Number of Namespaces: 32 00:18:28.750 Max Number of I/O Queues: 127 00:18:28.750 NVMe Specification Version (VS): 1.3 00:18:28.750 NVMe Specification Version (Identify): 1.3 00:18:28.750 Maximum Queue Entries: 256 00:18:28.750 Contiguous Queues Required: Yes 00:18:28.750 Arbitration Mechanisms Supported 00:18:28.750 Weighted Round Robin: Not Supported 00:18:28.750 Vendor Specific: Not Supported 00:18:28.750 Reset Timeout: 15000 ms 00:18:28.750 Doorbell Stride: 4 bytes 00:18:28.750 NVM Subsystem Reset: Not Supported 00:18:28.750 Command Sets Supported 00:18:28.750 NVM Command Set: Supported 00:18:28.750 Boot Partition: Not Supported 00:18:28.750 Memory Page Size Minimum: 4096 bytes 00:18:28.750 Memory Page Size Maximum: 4096 bytes 00:18:28.750 Persistent Memory Region: Not Supported 00:18:28.750 Optional Asynchronous Events Supported 00:18:28.750 Namespace Attribute Notices: Supported 00:18:28.750 Firmware Activation Notices: Not Supported 00:18:28.750 ANA Change Notices: Not Supported 00:18:28.750 PLE Aggregate Log Change Notices: 
Not Supported 00:18:28.750 LBA Status Info Alert Notices: Not Supported 00:18:28.750 EGE Aggregate Log Change Notices: Not Supported 00:18:28.750 Normal NVM Subsystem Shutdown event: Not Supported 00:18:28.750 Zone Descriptor Change Notices: Not Supported 00:18:28.750 Discovery Log Change Notices: Not Supported 00:18:28.750 Controller Attributes 00:18:28.750 128-bit Host Identifier: Supported 00:18:28.750 Non-Operational Permissive Mode: Not Supported 00:18:28.750 NVM Sets: Not Supported 00:18:28.750 Read Recovery Levels: Not Supported 00:18:28.750 Endurance Groups: Not Supported 00:18:28.750 Predictable Latency Mode: Not Supported 00:18:28.750 Traffic Based Keep ALive: Not Supported 00:18:28.750 Namespace Granularity: Not Supported 00:18:28.750 SQ Associations: Not Supported 00:18:28.750 UUID List: Not Supported 00:18:28.750 Multi-Domain Subsystem: Not Supported 00:18:28.750 Fixed Capacity Management: Not Supported 00:18:28.750 Variable Capacity Management: Not Supported 00:18:28.750 Delete Endurance Group: Not Supported 00:18:28.750 Delete NVM Set: Not Supported 00:18:28.750 Extended LBA Formats Supported: Not Supported 00:18:28.750 Flexible Data Placement Supported: Not Supported 00:18:28.750 00:18:28.750 Controller Memory Buffer Support 00:18:28.750 ================================ 00:18:28.750 Supported: No 00:18:28.750 00:18:28.750 Persistent Memory Region Support 00:18:28.750 ================================ 00:18:28.750 Supported: No 00:18:28.750 00:18:28.750 Admin Command Set Attributes 00:18:28.750 ============================ 00:18:28.750 Security Send/Receive: Not Supported 00:18:28.750 Format NVM: Not Supported 00:18:28.750 Firmware Activate/Download: Not Supported 00:18:28.750 Namespace Management: Not Supported 00:18:28.750 Device Self-Test: Not Supported 00:18:28.750 Directives: Not Supported 00:18:28.750 NVMe-MI: Not Supported 00:18:28.750 Virtualization Management: Not Supported 00:18:28.750 Doorbell Buffer Config: Not Supported 00:18:28.750 Get LBA Status Capability: Not Supported 00:18:28.750 Command & Feature Lockdown Capability: Not Supported 00:18:28.750 Abort Command Limit: 4 00:18:28.750 Async Event Request Limit: 4 00:18:28.750 Number of Firmware Slots: N/A 00:18:28.750 Firmware Slot 1 Read-Only: N/A 00:18:28.750 Firmware Activation Without Reset: N/A 00:18:28.750 Multiple Update Detection Support: N/A 00:18:28.750 Firmware Update Granularity: No Information Provided 00:18:28.750 Per-Namespace SMART Log: No 00:18:28.750 Asymmetric Namespace Access Log Page: Not Supported 00:18:28.750 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:28.750 Command Effects Log Page: Supported 00:18:28.750 Get Log Page Extended Data: Supported 00:18:28.750 Telemetry Log Pages: Not Supported 00:18:28.750 Persistent Event Log Pages: Not Supported 00:18:28.750 Supported Log Pages Log Page: May Support 00:18:28.750 Commands Supported & Effects Log Page: Not Supported 00:18:28.750 Feature Identifiers & Effects Log Page:May Support 00:18:28.750 NVMe-MI Commands & Effects Log Page: May Support 00:18:28.750 Data Area 4 for Telemetry Log: Not Supported 00:18:28.750 Error Log Page Entries Supported: 128 00:18:28.750 Keep Alive: Supported 00:18:28.750 Keep Alive Granularity: 10000 ms 00:18:28.750 00:18:28.750 NVM Command Set Attributes 00:18:28.750 ========================== 00:18:28.750 Submission Queue Entry Size 00:18:28.750 Max: 64 00:18:28.750 Min: 64 00:18:28.750 Completion Queue Entry Size 00:18:28.750 Max: 16 00:18:28.750 Min: 16 00:18:28.750 Number of Namespaces: 32 00:18:28.750 Compare 
Command: Supported 00:18:28.750 Write Uncorrectable Command: Not Supported 00:18:28.750 Dataset Management Command: Supported 00:18:28.750 Write Zeroes Command: Supported 00:18:28.750 Set Features Save Field: Not Supported 00:18:28.750 Reservations: Not Supported 00:18:28.750 Timestamp: Not Supported 00:18:28.750 Copy: Supported 00:18:28.750 Volatile Write Cache: Present 00:18:28.750 Atomic Write Unit (Normal): 1 00:18:28.750 Atomic Write Unit (PFail): 1 00:18:28.750 Atomic Compare & Write Unit: 1 00:18:28.750 Fused Compare & Write: Supported 00:18:28.750 Scatter-Gather List 00:18:28.750 SGL Command Set: Supported (Dword aligned) 00:18:28.750 SGL Keyed: Not Supported 00:18:28.750 SGL Bit Bucket Descriptor: Not Supported 00:18:28.750 SGL Metadata Pointer: Not Supported 00:18:28.750 Oversized SGL: Not Supported 00:18:28.750 SGL Metadata Address: Not Supported 00:18:28.750 SGL Offset: Not Supported 00:18:28.750 Transport SGL Data Block: Not Supported 00:18:28.750 Replay Protected Memory Block: Not Supported 00:18:28.750 00:18:28.750 Firmware Slot Information 00:18:28.750 ========================= 00:18:28.750 Active slot: 1 00:18:28.750 Slot 1 Firmware Revision: 24.09 00:18:28.750 00:18:28.750 00:18:28.750 Commands Supported and Effects 00:18:28.750 ============================== 00:18:28.750 Admin Commands 00:18:28.750 -------------- 00:18:28.750 Get Log Page (02h): Supported 00:18:28.750 Identify (06h): Supported 00:18:28.750 Abort (08h): Supported 00:18:28.750 Set Features (09h): Supported 00:18:28.750 Get Features (0Ah): Supported 00:18:28.750 Asynchronous Event Request (0Ch): Supported 00:18:28.750 Keep Alive (18h): Supported 00:18:28.750 I/O Commands 00:18:28.750 ------------ 00:18:28.750 Flush (00h): Supported LBA-Change 00:18:28.750 Write (01h): Supported LBA-Change 00:18:28.750 Read (02h): Supported 00:18:28.750 Compare (05h): Supported 00:18:28.750 Write Zeroes (08h): Supported LBA-Change 00:18:28.750 Dataset Management (09h): Supported LBA-Change 00:18:28.750 Copy (19h): Supported LBA-Change 00:18:28.750 Unknown (79h): Supported LBA-Change 00:18:28.750 Unknown (7Ah): Supported 00:18:28.750 00:18:28.750 Error Log 00:18:28.750 ========= 00:18:28.750 00:18:28.750 Arbitration 00:18:28.750 =========== 00:18:28.751 Arbitration Burst: 1 00:18:28.751 00:18:28.751 Power Management 00:18:28.751 ================ 00:18:28.751 Number of Power States: 1 00:18:28.751 Current Power State: Power State #0 00:18:28.751 Power State #0: 00:18:28.751 Max Power: 0.00 W 00:18:28.751 Non-Operational State: Operational 00:18:28.751 Entry Latency: Not Reported 00:18:28.751 Exit Latency: Not Reported 00:18:28.751 Relative Read Throughput: 0 00:18:28.751 Relative Read Latency: 0 00:18:28.751 Relative Write Throughput: 0 00:18:28.751 Relative Write Latency: 0 00:18:28.751 Idle Power: Not Reported 00:18:28.751 Active Power: Not Reported 00:18:28.751 Non-Operational Permissive Mode: Not Supported 00:18:28.751 00:18:28.751 Health Information 00:18:28.751 ================== 00:18:28.751 Critical Warnings: 00:18:28.751 Available Spare Space: OK 00:18:28.751 Temperature: OK 00:18:28.751 Device Reliability: OK 00:18:28.751 Read Only: No 00:18:28.751 Volatile Memory Backup: OK 00:18:28.751 Current Temperature: 0 Kelvin (-2[2024-06-10 13:46:43.114610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:28.751 [2024-06-10 13:46:43.114622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 
p:1 m:0 dnr:0 00:18:28.751 [2024-06-10 13:46:43.114660] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:18:28.751 [2024-06-10 13:46:43.114674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.751 [2024-06-10 13:46:43.114685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.751 [2024-06-10 13:46:43.114695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.751 [2024-06-10 13:46:43.114706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.751 [2024-06-10 13:46:43.115271] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:28.751 [2024-06-10 13:46:43.115287] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:28.751 [2024-06-10 13:46:43.116269] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:28.751 [2024-06-10 13:46:43.116331] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:18:28.751 [2024-06-10 13:46:43.116343] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:18:28.751 [2024-06-10 13:46:43.117274] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:28.751 [2024-06-10 13:46:43.117290] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:18:28.751 [2024-06-10 13:46:43.117347] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:28.751 [2024-06-10 13:46:43.122585] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:28.751 73 Celsius) 00:18:28.751 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:28.751 Available Spare: 0% 00:18:28.751 Available Spare Threshold: 0% 00:18:28.751 Life Percentage Used: 0% 00:18:28.751 Data Units Read: 0 00:18:28.751 Data Units Written: 0 00:18:28.751 Host Read Commands: 0 00:18:28.751 Host Write Commands: 0 00:18:28.751 Controller Busy Time: 0 minutes 00:18:28.751 Power Cycles: 0 00:18:28.751 Power On Hours: 0 hours 00:18:28.751 Unsafe Shutdowns: 0 00:18:28.751 Unrecoverable Media Errors: 0 00:18:28.751 Lifetime Error Log Entries: 0 00:18:28.751 Warning Temperature Time: 0 minutes 00:18:28.751 Critical Temperature Time: 0 minutes 00:18:28.751 00:18:28.751 Number of Queues 00:18:28.751 ================ 00:18:28.751 Number of I/O Submission Queues: 127 00:18:28.751 Number of I/O Completion Queues: 127 00:18:28.751 00:18:28.751 Active Namespaces 00:18:28.751 ================= 00:18:28.751 Namespace ID:1 00:18:28.751 Error Recovery Timeout: Unlimited 00:18:28.751 Command Set Identifier: NVM (00h) 00:18:28.751 Deallocate: Supported 00:18:28.751 Deallocated/Unwritten Error: Not Supported 00:18:28.751 Deallocated Read Value: Unknown 00:18:28.751 Deallocate 
in Write Zeroes: Not Supported 00:18:28.751 Deallocated Guard Field: 0xFFFF 00:18:28.751 Flush: Supported 00:18:28.751 Reservation: Supported 00:18:28.751 Namespace Sharing Capabilities: Multiple Controllers 00:18:28.751 Size (in LBAs): 131072 (0GiB) 00:18:28.751 Capacity (in LBAs): 131072 (0GiB) 00:18:28.751 Utilization (in LBAs): 131072 (0GiB) 00:18:28.751 NGUID: 078BD6382C9E4FA29D145FC0D330D4D5 00:18:28.751 UUID: 078bd638-2c9e-4fa2-9d14-5fc0d330d4d5 00:18:28.751 Thin Provisioning: Not Supported 00:18:28.751 Per-NS Atomic Units: Yes 00:18:28.751 Atomic Boundary Size (Normal): 0 00:18:28.751 Atomic Boundary Size (PFail): 0 00:18:28.751 Atomic Boundary Offset: 0 00:18:28.751 Maximum Single Source Range Length: 65535 00:18:28.751 Maximum Copy Length: 65535 00:18:28.751 Maximum Source Range Count: 1 00:18:28.751 NGUID/EUI64 Never Reused: No 00:18:28.751 Namespace Write Protected: No 00:18:28.751 Number of LBA Formats: 1 00:18:28.751 Current LBA Format: LBA Format #00 00:18:28.751 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:28.751 00:18:28.751 13:46:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:28.751 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.010 [2024-06-10 13:46:43.364447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:34.278 Initializing NVMe Controllers 00:18:34.278 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:34.278 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:34.278 Initialization complete. Launching workers. 00:18:34.278 ======================================================== 00:18:34.278 Latency(us) 00:18:34.278 Device Information : IOPS MiB/s Average min max 00:18:34.278 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 31841.16 124.38 4019.18 1251.48 8214.82 00:18:34.278 ======================================================== 00:18:34.278 Total : 31841.16 124.38 4019.18 1251.48 8214.82 00:18:34.278 00:18:34.278 [2024-06-10 13:46:48.384719] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:34.278 13:46:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:34.278 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.278 [2024-06-10 13:46:48.639001] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:39.549 Initializing NVMe Controllers 00:18:39.549 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:39.549 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:39.549 Initialization complete. Launching workers. 
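The read numbers above and the write numbers below come from spdk_nvme_perf addressing the vfio-user endpoint through a transport-ID string instead of a PCI address. A sketch of that invocation, with flag annotations as read from this log rather than from authoritative documentation:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    #   trtype  VFIOUSER instead of PCIe/TCP/RDMA
    #   traddr  directory holding the endpoint's cntrl socket, created by the target earlier in this log
    #   subnqn  NQN of the subsystem listening on that endpoint
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -q 128 -o 4096 -w read -t 5 -c 0x2 -s 256 -g
    #   -q 128  queue depth        -o 4096  I/O size in bytes   -w read  access pattern
    #   -t 5    run time, seconds  -c 0x2   core mask (lcore 1, matching the association above)
    #   -s/-g   DPDK memory tuning; the EAL parameter echoes in this log show -g becoming --single-file-segments

Swapping -w read for -w write reproduces the second run below; -w randrw -M 50 gives the mixed workload the reconnect example uses afterwards.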
00:18:39.549 ======================================================== 00:18:39.549 Latency(us) 00:18:39.549 Device Information : IOPS MiB/s Average min max 00:18:39.549 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16040.42 62.66 7984.97 7729.04 15488.27 00:18:39.549 ======================================================== 00:18:39.549 Total : 16040.42 62.66 7984.97 7729.04 15488.27 00:18:39.549 00:18:39.549 [2024-06-10 13:46:53.681538] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:39.549 13:46:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:39.549 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.549 [2024-06-10 13:46:53.982969] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:44.916 [2024-06-10 13:46:59.052854] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:44.916 Initializing NVMe Controllers 00:18:44.917 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:44.917 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:44.917 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:44.917 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:44.917 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:44.917 Initialization complete. Launching workers. 00:18:44.917 Starting thread on core 2 00:18:44.917 Starting thread on core 3 00:18:44.917 Starting thread on core 1 00:18:44.917 13:46:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:44.917 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.176 [2024-06-10 13:46:59.441026] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:48.467 [2024-06-10 13:47:02.503782] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:48.467 Initializing NVMe Controllers 00:18:48.467 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:48.467 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:48.467 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:48.467 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:48.467 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:48.467 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:48.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:48.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:48.467 Initialization complete. Launching workers. 
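The arbitration example launched just above prints one summary line per core (shown below) in the form "<rate> IO/s  <secs> secs/100000 ios". The two figures are redundant, since secs is roughly 100000 divided by the rate, so a quick consistency check over a captured copy of this output is possible. The sketch below is illustrative only and assumes the output has been saved one entry per line to arbitration.log, a hypothetical file name:

    # Minimal sketch: cross-check the per-core summary lines printed by the
    # arbitration example (rate in IO/s vs. reported seconds per 100000 IOs).
    awk '/SPDK bdev Controller .* core [0-9]:/ {
            rate = $(NF-4);      # IO/s column
            secs = $(NF-2);      # secs/100000 ios column
            printf "rate=%s IO/s  reported=%ss  expected=%.2fs\n", rate, secs, 100000 / rate
         }' arbitration.log

Field positions are taken relative to the end of each line, so the timestamp prefix added by the CI harness does not change the result.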
00:18:48.467 Starting thread on core 1 with urgent priority queue 00:18:48.467 Starting thread on core 2 with urgent priority queue 00:18:48.467 Starting thread on core 3 with urgent priority queue 00:18:48.467 Starting thread on core 0 with urgent priority queue 00:18:48.467 SPDK bdev Controller (SPDK1 ) core 0: 9086.00 IO/s 11.01 secs/100000 ios 00:18:48.467 SPDK bdev Controller (SPDK1 ) core 1: 7359.33 IO/s 13.59 secs/100000 ios 00:18:48.467 SPDK bdev Controller (SPDK1 ) core 2: 9276.00 IO/s 10.78 secs/100000 ios 00:18:48.467 SPDK bdev Controller (SPDK1 ) core 3: 8081.33 IO/s 12.37 secs/100000 ios 00:18:48.467 ======================================================== 00:18:48.467 00:18:48.467 13:47:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:48.467 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.467 [2024-06-10 13:47:02.877168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:48.467 Initializing NVMe Controllers 00:18:48.467 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:48.467 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:48.467 Namespace ID: 1 size: 0GB 00:18:48.467 Initialization complete. 00:18:48.467 INFO: using host memory buffer for IO 00:18:48.467 Hello world! 00:18:48.468 [2024-06-10 13:47:02.910849] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:48.727 13:47:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:48.727 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.987 [2024-06-10 13:47:03.278289] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:49.925 Initializing NVMe Controllers 00:18:49.926 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:49.926 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:49.926 Initialization complete. Launching workers. 
00:18:49.926 submit (in ns) avg, min, max = 8420.0, 4094.4, 4003756.0 00:18:49.926 complete (in ns) avg, min, max = 19739.2, 2386.4, 4171112.0 00:18:49.926 00:18:49.926 Submit histogram 00:18:49.926 ================ 00:18:49.926 Range in us Cumulative Count 00:18:49.926 4.070 - 4.096: 0.0060% ( 1) 00:18:49.926 4.096 - 4.122: 0.0300% ( 4) 00:18:49.926 4.122 - 4.147: 0.8225% ( 132) 00:18:49.926 4.147 - 4.173: 4.7307% ( 651) 00:18:49.926 4.173 - 4.198: 11.2685% ( 1089) 00:18:49.926 4.198 - 4.224: 22.1168% ( 1807) 00:18:49.926 4.224 - 4.250: 31.4342% ( 1552) 00:18:49.926 4.250 - 4.275: 38.5664% ( 1188) 00:18:49.926 4.275 - 4.301: 45.5124% ( 1157) 00:18:49.926 4.301 - 4.326: 52.4884% ( 1162) 00:18:49.926 4.326 - 4.352: 65.7441% ( 2208) 00:18:49.926 4.352 - 4.378: 77.2948% ( 1924) 00:18:49.926 4.378 - 4.403: 83.5024% ( 1034) 00:18:49.926 4.403 - 4.429: 86.4742% ( 495) 00:18:49.926 4.429 - 4.454: 87.4407% ( 161) 00:18:49.926 4.454 - 4.480: 88.0771% ( 106) 00:18:49.926 4.480 - 4.506: 89.1697% ( 182) 00:18:49.926 4.506 - 4.531: 90.5625% ( 232) 00:18:49.926 4.531 - 4.557: 91.7932% ( 205) 00:18:49.926 4.557 - 4.582: 92.9879% ( 199) 00:18:49.926 4.582 - 4.608: 94.8130% ( 304) 00:18:49.926 4.608 - 4.634: 96.6501% ( 306) 00:18:49.926 4.634 - 4.659: 97.8207% ( 195) 00:18:49.926 4.659 - 4.685: 98.4931% ( 112) 00:18:49.926 4.685 - 4.710: 98.9314% ( 73) 00:18:49.926 4.710 - 4.736: 99.2496% ( 53) 00:18:49.926 4.736 - 4.762: 99.4297% ( 30) 00:18:49.926 4.762 - 4.787: 99.4957% ( 11) 00:18:49.926 4.787 - 4.813: 99.5437% ( 8) 00:18:49.926 4.813 - 4.838: 99.5557% ( 2) 00:18:49.926 4.941 - 4.966: 99.5617% ( 1) 00:18:49.926 6.400 - 6.426: 99.5677% ( 1) 00:18:49.926 7.168 - 7.219: 99.5738% ( 1) 00:18:49.926 7.731 - 7.782: 99.5798% ( 1) 00:18:49.926 7.782 - 7.834: 99.5858% ( 1) 00:18:49.926 7.834 - 7.885: 99.5918% ( 1) 00:18:49.926 7.885 - 7.936: 99.6098% ( 3) 00:18:49.926 7.936 - 7.987: 99.6158% ( 1) 00:18:49.926 8.038 - 8.090: 99.6218% ( 1) 00:18:49.926 8.090 - 8.141: 99.6278% ( 1) 00:18:49.926 8.192 - 8.243: 99.6338% ( 1) 00:18:49.926 8.243 - 8.294: 99.6398% ( 1) 00:18:49.926 8.294 - 8.346: 99.6458% ( 1) 00:18:49.926 8.346 - 8.397: 99.6638% ( 3) 00:18:49.926 8.397 - 8.448: 99.6698% ( 1) 00:18:49.926 8.448 - 8.499: 99.6758% ( 1) 00:18:49.926 8.550 - 8.602: 99.6878% ( 2) 00:18:49.926 8.602 - 8.653: 99.6998% ( 2) 00:18:49.926 8.653 - 8.704: 99.7058% ( 1) 00:18:49.926 8.909 - 8.960: 99.7238% ( 3) 00:18:49.926 9.011 - 9.062: 99.7419% ( 3) 00:18:49.926 9.114 - 9.165: 99.7479% ( 1) 00:18:49.926 9.165 - 9.216: 99.7539% ( 1) 00:18:49.926 9.216 - 9.267: 99.7659% ( 2) 00:18:49.926 9.267 - 9.318: 99.7719% ( 1) 00:18:49.926 9.370 - 9.421: 99.7779% ( 1) 00:18:49.926 9.421 - 9.472: 99.7839% ( 1) 00:18:49.926 9.472 - 9.523: 99.7899% ( 1) 00:18:49.926 9.574 - 9.626: 99.7959% ( 1) 00:18:49.926 9.626 - 9.677: 99.8019% ( 1) 00:18:49.926 9.677 - 9.728: 99.8079% ( 1) 00:18:49.926 9.779 - 9.830: 99.8199% ( 2) 00:18:49.926 9.882 - 9.933: 99.8379% ( 3) 00:18:49.926 10.445 - 10.496: 99.8439% ( 1) 00:18:49.926 10.496 - 10.547: 99.8559% ( 2) 00:18:49.926 10.752 - 10.803: 99.8619% ( 1) 00:18:49.926 10.854 - 10.906: 99.8679% ( 1) 00:18:49.926 10.906 - 10.957: 99.8739% ( 1) 00:18:49.926 10.957 - 11.008: 99.8799% ( 1) 00:18:49.926 11.366 - 11.418: 99.8859% ( 1) 00:18:49.926 13.210 - 13.312: 99.8919% ( 1) 00:18:49.926 14.234 - 14.336: 99.8979% ( 1) 00:18:49.926 3984.589 - 4010.803: 100.0000% ( 17) 00:18:49.926 00:18:49.926 Complete histogram 00:18:49.926 ================== 00:18:49.926 Range in us Cumulative Count 00:18:49.926 2.381 - 2.394: 
0.0840% ( 14) 00:18:49.926 2.394 - 2.406: 5.0849% ( 833) 00:18:49.926 2.406 - 2.419: 32.3348% ( 4539) 00:18:49.926 2.419 - 2.432: 56.4267% ( 4013) 00:18:49.926 2.432 - [2024-06-10 13:47:04.299173] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:49.926 2.445: 62.2801% ( 975) 00:18:49.926 2.445 - 2.458: 70.4929% ( 1368) 00:18:49.926 2.458 - 2.470: 83.7726% ( 2212) 00:18:49.926 2.470 - 2.483: 90.9047% ( 1188) 00:18:49.926 2.483 - 2.496: 93.9965% ( 515) 00:18:49.926 2.496 - 2.509: 96.7161% ( 453) 00:18:49.926 2.509 - 2.522: 97.9528% ( 206) 00:18:49.926 2.522 - 2.534: 98.4391% ( 81) 00:18:49.926 2.534 - 2.547: 98.7813% ( 57) 00:18:49.926 2.547 - 2.560: 99.0875% ( 51) 00:18:49.926 2.560 - 2.573: 99.1835% ( 16) 00:18:49.926 2.573 - 2.586: 99.2015% ( 3) 00:18:49.926 2.586 - 2.598: 99.2135% ( 2) 00:18:49.926 2.598 - 2.611: 99.2436% ( 5) 00:18:49.926 2.611 - 2.624: 99.2616% ( 3) 00:18:49.926 2.624 - 2.637: 99.2736% ( 2) 00:18:49.926 2.650 - 2.662: 99.2856% ( 2) 00:18:49.926 2.675 - 2.688: 99.2976% ( 2) 00:18:49.926 2.957 - 2.970: 99.3036% ( 1) 00:18:49.926 3.149 - 3.162: 99.3096% ( 1) 00:18:49.926 3.162 - 3.174: 99.3156% ( 1) 00:18:49.926 5.222 - 5.248: 99.3216% ( 1) 00:18:49.926 5.734 - 5.760: 99.3276% ( 1) 00:18:49.926 5.786 - 5.811: 99.3336% ( 1) 00:18:49.926 5.811 - 5.837: 99.3396% ( 1) 00:18:49.926 6.016 - 6.042: 99.3456% ( 1) 00:18:49.926 6.067 - 6.093: 99.3516% ( 1) 00:18:49.926 6.195 - 6.221: 99.3576% ( 1) 00:18:49.926 6.298 - 6.323: 99.3636% ( 1) 00:18:49.926 6.374 - 6.400: 99.3696% ( 1) 00:18:49.926 6.451 - 6.477: 99.3756% ( 1) 00:18:49.926 6.554 - 6.605: 99.3816% ( 1) 00:18:49.926 6.707 - 6.758: 99.3936% ( 2) 00:18:49.926 7.014 - 7.066: 99.3997% ( 1) 00:18:49.926 7.066 - 7.117: 99.4057% ( 1) 00:18:49.926 7.117 - 7.168: 99.4177% ( 2) 00:18:49.926 7.168 - 7.219: 99.4297% ( 2) 00:18:49.926 7.219 - 7.270: 99.4357% ( 1) 00:18:49.926 7.270 - 7.322: 99.4537% ( 3) 00:18:49.926 7.373 - 7.424: 99.4597% ( 1) 00:18:49.926 7.424 - 7.475: 99.4657% ( 1) 00:18:49.926 7.475 - 7.526: 99.4777% ( 2) 00:18:49.926 7.526 - 7.578: 99.4957% ( 3) 00:18:49.926 7.680 - 7.731: 99.5017% ( 1) 00:18:49.926 7.782 - 7.834: 99.5077% ( 1) 00:18:49.926 7.885 - 7.936: 99.5137% ( 1) 00:18:49.926 7.936 - 7.987: 99.5197% ( 1) 00:18:49.927 8.038 - 8.090: 99.5257% ( 1) 00:18:49.927 8.448 - 8.499: 99.5317% ( 1) 00:18:49.927 8.602 - 8.653: 99.5377% ( 1) 00:18:49.927 8.960 - 9.011: 99.5437% ( 1) 00:18:49.927 9.062 - 9.114: 99.5497% ( 1) 00:18:49.927 9.267 - 9.318: 99.5557% ( 1) 00:18:49.927 13.107 - 13.210: 99.5617% ( 1) 00:18:49.927 14.336 - 14.438: 99.5677% ( 1) 00:18:49.927 3984.589 - 4010.803: 99.9880% ( 70) 00:18:49.927 4037.018 - 4063.232: 99.9940% ( 1) 00:18:49.927 4168.090 - 4194.304: 100.0000% ( 1) 00:18:49.927 00:18:49.927 13:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:49.927 13:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:49.927 13:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:49.927 13:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:49.927 13:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:50.186 [ 00:18:50.186 { 00:18:50.186 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:18:50.186 "subtype": "Discovery", 00:18:50.186 "listen_addresses": [], 00:18:50.186 "allow_any_host": true, 00:18:50.186 "hosts": [] 00:18:50.186 }, 00:18:50.186 { 00:18:50.186 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:50.186 "subtype": "NVMe", 00:18:50.186 "listen_addresses": [ 00:18:50.186 { 00:18:50.186 "trtype": "VFIOUSER", 00:18:50.186 "adrfam": "IPv4", 00:18:50.186 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:50.186 "trsvcid": "0" 00:18:50.186 } 00:18:50.186 ], 00:18:50.186 "allow_any_host": true, 00:18:50.186 "hosts": [], 00:18:50.186 "serial_number": "SPDK1", 00:18:50.186 "model_number": "SPDK bdev Controller", 00:18:50.186 "max_namespaces": 32, 00:18:50.186 "min_cntlid": 1, 00:18:50.186 "max_cntlid": 65519, 00:18:50.186 "namespaces": [ 00:18:50.186 { 00:18:50.186 "nsid": 1, 00:18:50.186 "bdev_name": "Malloc1", 00:18:50.186 "name": "Malloc1", 00:18:50.186 "nguid": "078BD6382C9E4FA29D145FC0D330D4D5", 00:18:50.186 "uuid": "078bd638-2c9e-4fa2-9d14-5fc0d330d4d5" 00:18:50.186 } 00:18:50.186 ] 00:18:50.186 }, 00:18:50.186 { 00:18:50.186 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:50.186 "subtype": "NVMe", 00:18:50.186 "listen_addresses": [ 00:18:50.186 { 00:18:50.186 "trtype": "VFIOUSER", 00:18:50.186 "adrfam": "IPv4", 00:18:50.186 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:50.186 "trsvcid": "0" 00:18:50.186 } 00:18:50.186 ], 00:18:50.186 "allow_any_host": true, 00:18:50.186 "hosts": [], 00:18:50.186 "serial_number": "SPDK2", 00:18:50.186 "model_number": "SPDK bdev Controller", 00:18:50.186 "max_namespaces": 32, 00:18:50.186 "min_cntlid": 1, 00:18:50.186 "max_cntlid": 65519, 00:18:50.186 "namespaces": [ 00:18:50.186 { 00:18:50.186 "nsid": 1, 00:18:50.186 "bdev_name": "Malloc2", 00:18:50.186 "name": "Malloc2", 00:18:50.186 "nguid": "4BED410A60934B059031F58B3E6FED1F", 00:18:50.186 "uuid": "4bed410a-6093-4b05-9031-f58b3e6fed1f" 00:18:50.186 } 00:18:50.186 ] 00:18:50.186 } 00:18:50.186 ] 00:18:50.186 13:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:50.186 13:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1347378 00:18:50.186 13:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:50.186 13:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:50.186 13:47:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:18:50.186 13:47:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:50.186 13:47:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:50.186 13:47:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:18:50.186 13:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:50.186 13:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:50.446 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.446 Malloc3 00:18:50.446 [2024-06-10 13:47:04.836057] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:50.446 13:47:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:50.706 [2024-06-10 13:47:05.068923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:50.706 13:47:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:50.706 Asynchronous Event Request test 00:18:50.706 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:50.706 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:50.706 Registering asynchronous event callbacks... 00:18:50.706 Starting namespace attribute notice tests for all controllers... 00:18:50.706 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:50.706 aer_cb - Changed Namespace 00:18:50.706 Cleaning up... 00:18:50.965 [ 00:18:50.965 { 00:18:50.965 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:50.965 "subtype": "Discovery", 00:18:50.965 "listen_addresses": [], 00:18:50.965 "allow_any_host": true, 00:18:50.965 "hosts": [] 00:18:50.965 }, 00:18:50.965 { 00:18:50.965 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:50.965 "subtype": "NVMe", 00:18:50.965 "listen_addresses": [ 00:18:50.965 { 00:18:50.965 "trtype": "VFIOUSER", 00:18:50.965 "adrfam": "IPv4", 00:18:50.965 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:50.965 "trsvcid": "0" 00:18:50.965 } 00:18:50.965 ], 00:18:50.965 "allow_any_host": true, 00:18:50.965 "hosts": [], 00:18:50.965 "serial_number": "SPDK1", 00:18:50.965 "model_number": "SPDK bdev Controller", 00:18:50.965 "max_namespaces": 32, 00:18:50.965 "min_cntlid": 1, 00:18:50.965 "max_cntlid": 65519, 00:18:50.965 "namespaces": [ 00:18:50.965 { 00:18:50.965 "nsid": 1, 00:18:50.965 "bdev_name": "Malloc1", 00:18:50.965 "name": "Malloc1", 00:18:50.965 "nguid": "078BD6382C9E4FA29D145FC0D330D4D5", 00:18:50.965 "uuid": "078bd638-2c9e-4fa2-9d14-5fc0d330d4d5" 00:18:50.965 }, 00:18:50.965 { 00:18:50.965 "nsid": 2, 00:18:50.965 "bdev_name": "Malloc3", 00:18:50.965 "name": "Malloc3", 00:18:50.965 "nguid": "DFF4EACF21C24C8BA34E9BED8125D82F", 00:18:50.965 "uuid": "dff4eacf-21c2-4c8b-a34e-9bed8125d82f" 00:18:50.965 } 00:18:50.965 ] 00:18:50.965 }, 00:18:50.965 { 00:18:50.965 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:50.965 "subtype": "NVMe", 00:18:50.965 "listen_addresses": [ 00:18:50.965 { 00:18:50.965 "trtype": "VFIOUSER", 00:18:50.965 "adrfam": "IPv4", 00:18:50.965 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:50.965 "trsvcid": "0" 00:18:50.965 } 00:18:50.965 ], 00:18:50.965 "allow_any_host": true, 00:18:50.965 "hosts": [], 00:18:50.965 "serial_number": "SPDK2", 00:18:50.965 "model_number": "SPDK bdev Controller", 00:18:50.965 
"max_namespaces": 32, 00:18:50.965 "min_cntlid": 1, 00:18:50.965 "max_cntlid": 65519, 00:18:50.965 "namespaces": [ 00:18:50.965 { 00:18:50.965 "nsid": 1, 00:18:50.965 "bdev_name": "Malloc2", 00:18:50.965 "name": "Malloc2", 00:18:50.965 "nguid": "4BED410A60934B059031F58B3E6FED1F", 00:18:50.965 "uuid": "4bed410a-6093-4b05-9031-f58b3e6fed1f" 00:18:50.965 } 00:18:50.965 ] 00:18:50.965 } 00:18:50.965 ] 00:18:50.965 13:47:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1347378 00:18:50.965 13:47:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:50.965 13:47:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:50.965 13:47:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:50.965 13:47:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:50.965 [2024-06-10 13:47:05.344478] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:18:50.965 [2024-06-10 13:47:05.344525] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347452 ] 00:18:50.965 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.965 [2024-06-10 13:47:05.380929] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:50.965 [2024-06-10 13:47:05.383226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:50.965 [2024-06-10 13:47:05.383255] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efedb65a000 00:18:50.965 [2024-06-10 13:47:05.384232] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.965 [2024-06-10 13:47:05.385236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.965 [2024-06-10 13:47:05.386249] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.965 [2024-06-10 13:47:05.387261] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:50.965 [2024-06-10 13:47:05.388271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:50.965 [2024-06-10 13:47:05.389282] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.965 [2024-06-10 13:47:05.390287] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:50.965 [2024-06-10 13:47:05.391294] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.965 [2024-06-10 13:47:05.392305] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:50.965 [2024-06-10 13:47:05.392324] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efedb64f000 00:18:50.965 [2024-06-10 13:47:05.393569] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:50.965 [2024-06-10 13:47:05.413107] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:50.965 [2024-06-10 13:47:05.413140] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:50.965 [2024-06-10 13:47:05.415212] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:50.965 [2024-06-10 13:47:05.415265] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:50.965 [2024-06-10 13:47:05.415356] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:50.965 [2024-06-10 13:47:05.415379] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:50.965 [2024-06-10 13:47:05.415389] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:50.965 [2024-06-10 13:47:05.416218] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:50.965 [2024-06-10 13:47:05.416233] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:50.965 [2024-06-10 13:47:05.416248] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:50.965 [2024-06-10 13:47:05.417217] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:50.965 [2024-06-10 13:47:05.417231] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:50.965 [2024-06-10 13:47:05.417243] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:50.965 [2024-06-10 13:47:05.418226] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:50.965 [2024-06-10 13:47:05.418240] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:50.965 [2024-06-10 13:47:05.419232] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:50.965 [2024-06-10 13:47:05.419246] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:50.965 [2024-06-10 13:47:05.419255] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:50.965 [2024-06-10 13:47:05.419266] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:50.965 [2024-06-10 13:47:05.419376] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:50.965 [2024-06-10 13:47:05.419384] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:50.965 [2024-06-10 13:47:05.419393] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:50.965 [2024-06-10 13:47:05.420239] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:50.965 [2024-06-10 13:47:05.421242] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:50.965 [2024-06-10 13:47:05.422254] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:50.965 [2024-06-10 13:47:05.423254] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:50.966 [2024-06-10 13:47:05.423306] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:50.966 [2024-06-10 13:47:05.424271] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:50.966 [2024-06-10 13:47:05.424285] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:50.966 [2024-06-10 13:47:05.424294] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:50.966 [2024-06-10 13:47:05.424320] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:50.966 [2024-06-10 13:47:05.424332] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:50.966 [2024-06-10 13:47:05.424352] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:50.966 [2024-06-10 13:47:05.424361] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:50.966 [2024-06-10 13:47:05.424379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:50.966 [2024-06-10 13:47:05.430588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:50.966 [2024-06-10 13:47:05.430606] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:50.966 [2024-06-10 13:47:05.430614] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:50.966 [2024-06-10 13:47:05.430622] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:50.966 [2024-06-10 13:47:05.430634] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:50.966 [2024-06-10 13:47:05.430643] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:50.966 [2024-06-10 13:47:05.430651] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:50.966 [2024-06-10 13:47:05.430659] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:50.966 [2024-06-10 13:47:05.430671] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:50.966 [2024-06-10 13:47:05.430685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:51.225 [2024-06-10 13:47:05.438586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:51.225 [2024-06-10 13:47:05.438604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.225 [2024-06-10 13:47:05.438616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.225 [2024-06-10 13:47:05.438628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.225 [2024-06-10 13:47:05.438640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.225 [2024-06-10 13:47:05.438649] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:51.225 [2024-06-10 13:47:05.438663] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.438677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.446585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.446597] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:51.226 [2024-06-10 13:47:05.446606] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.446618] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.446627] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.446640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.454584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.454646] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.454659] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.454671] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:51.226 [2024-06-10 13:47:05.454680] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:51.226 [2024-06-10 13:47:05.454689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.462585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.462601] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:51.226 [2024-06-10 13:47:05.462619] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.462632] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.462643] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:51.226 [2024-06-10 13:47:05.462651] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:51.226 [2024-06-10 13:47:05.462661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.470585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.470606] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.470619] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.470630] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:51.226 [2024-06-10 13:47:05.470639] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:51.226 [2024-06-10 13:47:05.470648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.478585] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.478600] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.478612] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.478627] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.478636] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.478645] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.478656] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:51.226 [2024-06-10 13:47:05.478664] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:51.226 [2024-06-10 13:47:05.478673] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:51.226 [2024-06-10 13:47:05.478697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.486588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.486610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.494587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.494607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.502585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.502605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.510587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.510608] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:51.226 [2024-06-10 13:47:05.510617] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:51.226 [2024-06-10 13:47:05.510625] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:51.226 [2024-06-10 13:47:05.510633] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:51.226 [2024-06-10 13:47:05.510643] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:51.226 [2024-06-10 13:47:05.510654] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:51.226 [2024-06-10 13:47:05.510662] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:51.226 [2024-06-10 13:47:05.510671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.510682] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:51.226 [2024-06-10 13:47:05.510690] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:51.226 [2024-06-10 13:47:05.510699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.510710] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:51.226 [2024-06-10 13:47:05.510718] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:51.226 [2024-06-10 13:47:05.510727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.518584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.518608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.518623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.518640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:51.226 ===================================================== 00:18:51.226 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:51.226 ===================================================== 00:18:51.226 Controller Capabilities/Features 00:18:51.226 ================================ 00:18:51.226 Vendor ID: 4e58 00:18:51.226 Subsystem Vendor ID: 4e58 00:18:51.226 Serial Number: SPDK2 00:18:51.226 Model Number: SPDK bdev Controller 00:18:51.226 Firmware Version: 24.09 00:18:51.226 Recommended Arb Burst: 6 00:18:51.226 IEEE OUI Identifier: 8d 6b 50 00:18:51.226 Multi-path I/O 00:18:51.226 May have multiple subsystem ports: Yes 00:18:51.226 May have multiple controllers: Yes 00:18:51.226 Associated with SR-IOV VF: No 00:18:51.226 Max Data Transfer Size: 131072 00:18:51.226 Max Number of Namespaces: 32 00:18:51.226 Max Number of I/O Queues: 127 00:18:51.226 NVMe Specification Version (VS): 1.3 00:18:51.226 NVMe Specification Version (Identify): 1.3 00:18:51.226 Maximum Queue Entries: 256 00:18:51.226 Contiguous Queues Required: Yes 00:18:51.226 Arbitration Mechanisms Supported 00:18:51.226 Weighted Round Robin: Not Supported 00:18:51.226 Vendor Specific: Not Supported 00:18:51.226 Reset Timeout: 15000 ms 00:18:51.226 Doorbell Stride: 4 bytes 
00:18:51.226 NVM Subsystem Reset: Not Supported 00:18:51.226 Command Sets Supported 00:18:51.226 NVM Command Set: Supported 00:18:51.226 Boot Partition: Not Supported 00:18:51.226 Memory Page Size Minimum: 4096 bytes 00:18:51.226 Memory Page Size Maximum: 4096 bytes 00:18:51.226 Persistent Memory Region: Not Supported 00:18:51.226 Optional Asynchronous Events Supported 00:18:51.226 Namespace Attribute Notices: Supported 00:18:51.226 Firmware Activation Notices: Not Supported 00:18:51.226 ANA Change Notices: Not Supported 00:18:51.226 PLE Aggregate Log Change Notices: Not Supported 00:18:51.226 LBA Status Info Alert Notices: Not Supported 00:18:51.226 EGE Aggregate Log Change Notices: Not Supported 00:18:51.226 Normal NVM Subsystem Shutdown event: Not Supported 00:18:51.226 Zone Descriptor Change Notices: Not Supported 00:18:51.226 Discovery Log Change Notices: Not Supported 00:18:51.226 Controller Attributes 00:18:51.226 128-bit Host Identifier: Supported 00:18:51.226 Non-Operational Permissive Mode: Not Supported 00:18:51.226 NVM Sets: Not Supported 00:18:51.226 Read Recovery Levels: Not Supported 00:18:51.226 Endurance Groups: Not Supported 00:18:51.226 Predictable Latency Mode: Not Supported 00:18:51.226 Traffic Based Keep ALive: Not Supported 00:18:51.226 Namespace Granularity: Not Supported 00:18:51.226 SQ Associations: Not Supported 00:18:51.226 UUID List: Not Supported 00:18:51.226 Multi-Domain Subsystem: Not Supported 00:18:51.226 Fixed Capacity Management: Not Supported 00:18:51.226 Variable Capacity Management: Not Supported 00:18:51.226 Delete Endurance Group: Not Supported 00:18:51.226 Delete NVM Set: Not Supported 00:18:51.226 Extended LBA Formats Supported: Not Supported 00:18:51.226 Flexible Data Placement Supported: Not Supported 00:18:51.226 00:18:51.226 Controller Memory Buffer Support 00:18:51.226 ================================ 00:18:51.226 Supported: No 00:18:51.226 00:18:51.226 Persistent Memory Region Support 00:18:51.226 ================================ 00:18:51.226 Supported: No 00:18:51.226 00:18:51.226 Admin Command Set Attributes 00:18:51.226 ============================ 00:18:51.226 Security Send/Receive: Not Supported 00:18:51.226 Format NVM: Not Supported 00:18:51.226 Firmware Activate/Download: Not Supported 00:18:51.226 Namespace Management: Not Supported 00:18:51.226 Device Self-Test: Not Supported 00:18:51.226 Directives: Not Supported 00:18:51.226 NVMe-MI: Not Supported 00:18:51.226 Virtualization Management: Not Supported 00:18:51.226 Doorbell Buffer Config: Not Supported 00:18:51.226 Get LBA Status Capability: Not Supported 00:18:51.226 Command & Feature Lockdown Capability: Not Supported 00:18:51.226 Abort Command Limit: 4 00:18:51.226 Async Event Request Limit: 4 00:18:51.226 Number of Firmware Slots: N/A 00:18:51.226 Firmware Slot 1 Read-Only: N/A 00:18:51.226 Firmware Activation Without Reset: N/A 00:18:51.226 Multiple Update Detection Support: N/A 00:18:51.226 Firmware Update Granularity: No Information Provided 00:18:51.226 Per-Namespace SMART Log: No 00:18:51.226 Asymmetric Namespace Access Log Page: Not Supported 00:18:51.226 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:51.226 Command Effects Log Page: Supported 00:18:51.226 Get Log Page Extended Data: Supported 00:18:51.226 Telemetry Log Pages: Not Supported 00:18:51.226 Persistent Event Log Pages: Not Supported 00:18:51.226 Supported Log Pages Log Page: May Support 00:18:51.226 Commands Supported & Effects Log Page: Not Supported 00:18:51.226 Feature Identifiers & Effects Log Page:May 
Support 00:18:51.226 NVMe-MI Commands & Effects Log Page: May Support 00:18:51.226 Data Area 4 for Telemetry Log: Not Supported 00:18:51.226 Error Log Page Entries Supported: 128 00:18:51.226 Keep Alive: Supported 00:18:51.226 Keep Alive Granularity: 10000 ms 00:18:51.226 00:18:51.226 NVM Command Set Attributes 00:18:51.226 ========================== 00:18:51.226 Submission Queue Entry Size 00:18:51.226 Max: 64 00:18:51.226 Min: 64 00:18:51.226 Completion Queue Entry Size 00:18:51.226 Max: 16 00:18:51.226 Min: 16 00:18:51.226 Number of Namespaces: 32 00:18:51.226 Compare Command: Supported 00:18:51.226 Write Uncorrectable Command: Not Supported 00:18:51.226 Dataset Management Command: Supported 00:18:51.226 Write Zeroes Command: Supported 00:18:51.226 Set Features Save Field: Not Supported 00:18:51.226 Reservations: Not Supported 00:18:51.226 Timestamp: Not Supported 00:18:51.226 Copy: Supported 00:18:51.226 Volatile Write Cache: Present 00:18:51.226 Atomic Write Unit (Normal): 1 00:18:51.226 Atomic Write Unit (PFail): 1 00:18:51.226 Atomic Compare & Write Unit: 1 00:18:51.226 Fused Compare & Write: Supported 00:18:51.226 Scatter-Gather List 00:18:51.226 SGL Command Set: Supported (Dword aligned) 00:18:51.226 SGL Keyed: Not Supported 00:18:51.226 SGL Bit Bucket Descriptor: Not Supported 00:18:51.226 SGL Metadata Pointer: Not Supported 00:18:51.226 Oversized SGL: Not Supported 00:18:51.226 SGL Metadata Address: Not Supported 00:18:51.226 SGL Offset: Not Supported 00:18:51.226 Transport SGL Data Block: Not Supported 00:18:51.226 Replay Protected Memory Block: Not Supported 00:18:51.226 00:18:51.226 Firmware Slot Information 00:18:51.226 ========================= 00:18:51.226 Active slot: 1 00:18:51.226 Slot 1 Firmware Revision: 24.09 00:18:51.226 00:18:51.226 00:18:51.226 Commands Supported and Effects 00:18:51.226 ============================== 00:18:51.226 Admin Commands 00:18:51.226 -------------- 00:18:51.226 Get Log Page (02h): Supported 00:18:51.226 Identify (06h): Supported 00:18:51.226 Abort (08h): Supported 00:18:51.226 Set Features (09h): Supported 00:18:51.226 Get Features (0Ah): Supported 00:18:51.226 Asynchronous Event Request (0Ch): Supported 00:18:51.226 Keep Alive (18h): Supported 00:18:51.226 I/O Commands 00:18:51.226 ------------ 00:18:51.226 Flush (00h): Supported LBA-Change 00:18:51.226 Write (01h): Supported LBA-Change 00:18:51.226 Read (02h): Supported 00:18:51.226 Compare (05h): Supported 00:18:51.226 Write Zeroes (08h): Supported LBA-Change 00:18:51.226 Dataset Management (09h): Supported LBA-Change 00:18:51.226 Copy (19h): Supported LBA-Change 00:18:51.226 Unknown (79h): Supported LBA-Change 00:18:51.226 Unknown (7Ah): Supported 00:18:51.226 00:18:51.226 Error Log 00:18:51.226 ========= 00:18:51.226 00:18:51.226 Arbitration 00:18:51.226 =========== 00:18:51.226 Arbitration Burst: 1 00:18:51.226 00:18:51.226 Power Management 00:18:51.226 ================ 00:18:51.226 Number of Power States: 1 00:18:51.226 Current Power State: Power State #0 00:18:51.226 Power State #0: 00:18:51.226 Max Power: 0.00 W 00:18:51.226 Non-Operational State: Operational 00:18:51.226 Entry Latency: Not Reported 00:18:51.226 Exit Latency: Not Reported 00:18:51.226 Relative Read Throughput: 0 00:18:51.226 Relative Read Latency: 0 00:18:51.226 Relative Write Throughput: 0 00:18:51.226 Relative Write Latency: 0 00:18:51.226 Idle Power: Not Reported 00:18:51.226 Active Power: Not Reported 00:18:51.226 Non-Operational Permissive Mode: Not Supported 00:18:51.226 00:18:51.226 Health Information 
00:18:51.226 ================== 00:18:51.226 Critical Warnings: 00:18:51.226 Available Spare Space: OK 00:18:51.226 Temperature: OK 00:18:51.226 Device Reliability: OK 00:18:51.226 Read Only: No 00:18:51.226 Volatile Memory Backup: OK 00:18:51.226 Current Temperature: 0 Kelvin (-2[2024-06-10 13:47:05.518762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:51.226 [2024-06-10 13:47:05.526586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.526627] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:51.226 [2024-06-10 13:47:05.526643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.526655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.526666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.526677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.226 [2024-06-10 13:47:05.526756] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:51.226 [2024-06-10 13:47:05.526773] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:51.226 [2024-06-10 13:47:05.527767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:51.227 [2024-06-10 13:47:05.527828] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:51.227 [2024-06-10 13:47:05.527839] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:51.227 [2024-06-10 13:47:05.530585] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:51.227 [2024-06-10 13:47:05.530604] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 2 milliseconds 00:18:51.227 [2024-06-10 13:47:05.530663] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:51.227 [2024-06-10 13:47:05.531965] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:51.227 73 Celsius) 00:18:51.227 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:51.227 Available Spare: 0% 00:18:51.227 Available Spare Threshold: 0% 00:18:51.227 Life Percentage Used: 0% 00:18:51.227 Data Units Read: 0 00:18:51.227 Data Units Written: 0 00:18:51.227 Host Read Commands: 0 00:18:51.227 Host Write Commands: 0 00:18:51.227 Controller Busy Time: 0 minutes 00:18:51.227 Power Cycles: 0 00:18:51.227 Power On Hours: 0 hours 00:18:51.227 Unsafe Shutdowns: 0 00:18:51.227 Unrecoverable Media Errors: 0 00:18:51.227 Lifetime Error Log Entries: 0 00:18:51.227 Warning Temperature Time: 0 
minutes 00:18:51.227 Critical Temperature Time: 0 minutes 00:18:51.227 00:18:51.227 Number of Queues 00:18:51.227 ================ 00:18:51.227 Number of I/O Submission Queues: 127 00:18:51.227 Number of I/O Completion Queues: 127 00:18:51.227 00:18:51.227 Active Namespaces 00:18:51.227 ================= 00:18:51.227 Namespace ID:1 00:18:51.227 Error Recovery Timeout: Unlimited 00:18:51.227 Command Set Identifier: NVM (00h) 00:18:51.227 Deallocate: Supported 00:18:51.227 Deallocated/Unwritten Error: Not Supported 00:18:51.227 Deallocated Read Value: Unknown 00:18:51.227 Deallocate in Write Zeroes: Not Supported 00:18:51.227 Deallocated Guard Field: 0xFFFF 00:18:51.227 Flush: Supported 00:18:51.227 Reservation: Supported 00:18:51.227 Namespace Sharing Capabilities: Multiple Controllers 00:18:51.227 Size (in LBAs): 131072 (0GiB) 00:18:51.227 Capacity (in LBAs): 131072 (0GiB) 00:18:51.227 Utilization (in LBAs): 131072 (0GiB) 00:18:51.227 NGUID: 4BED410A60934B059031F58B3E6FED1F 00:18:51.227 UUID: 4bed410a-6093-4b05-9031-f58b3e6fed1f 00:18:51.227 Thin Provisioning: Not Supported 00:18:51.227 Per-NS Atomic Units: Yes 00:18:51.227 Atomic Boundary Size (Normal): 0 00:18:51.227 Atomic Boundary Size (PFail): 0 00:18:51.227 Atomic Boundary Offset: 0 00:18:51.227 Maximum Single Source Range Length: 65535 00:18:51.227 Maximum Copy Length: 65535 00:18:51.227 Maximum Source Range Count: 1 00:18:51.227 NGUID/EUI64 Never Reused: No 00:18:51.227 Namespace Write Protected: No 00:18:51.227 Number of LBA Formats: 1 00:18:51.227 Current LBA Format: LBA Format #00 00:18:51.227 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:51.227 00:18:51.227 13:47:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:51.227 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.486 [2024-06-10 13:47:05.776903] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:56.758 Initializing NVMe Controllers 00:18:56.758 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:56.758 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:56.758 Initialization complete. Launching workers. 
00:18:56.758 ======================================================== 00:18:56.758 Latency(us) 00:18:56.758 Device Information : IOPS MiB/s Average min max 00:18:56.758 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40068.51 156.52 3193.89 1005.45 6938.86 00:18:56.758 ======================================================== 00:18:56.758 Total : 40068.51 156.52 3193.89 1005.45 6938.86 00:18:56.758 00:18:56.758 [2024-06-10 13:47:10.886863] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:56.758 13:47:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:56.758 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.758 [2024-06-10 13:47:11.140681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:02.031 Initializing NVMe Controllers 00:19:02.031 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:02.031 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:02.031 Initialization complete. Launching workers. 00:19:02.031 ======================================================== 00:19:02.031 Latency(us) 00:19:02.031 Device Information : IOPS MiB/s Average min max 00:19:02.031 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30294.73 118.34 4224.30 1282.62 7361.46 00:19:02.031 ======================================================== 00:19:02.031 Total : 30294.73 118.34 4224.30 1282.62 7361.46 00:19:02.031 00:19:02.031 [2024-06-10 13:47:16.163911] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:02.031 13:47:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:02.031 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.031 [2024-06-10 13:47:16.463224] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:07.304 [2024-06-10 13:47:21.598685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:07.304 Initializing NVMe Controllers 00:19:07.304 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:07.304 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:07.305 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:07.305 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:07.305 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:07.305 Initialization complete. Launching workers. 
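(Sanity check on the two latency tables above, assuming the 128-deep queue stays full for the whole run: by Little's law the average latency should be queue depth divided by throughput, i.e. 128 / 40068.51 IOPS ≈ 3194 us for the read pass and 128 / 30294.73 IOPS ≈ 4225 us for the write pass, closely matching the reported 3193.89 us and 4224.30 us averages. The same relation reads the arbitration table further below, where the secs/100000 ios column is simply 100000 divided by the IO/s column, e.g. 100000 / 3450.33 ≈ 28.98 s.)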
00:19:07.305 Starting thread on core 2 00:19:07.305 Starting thread on core 3 00:19:07.305 Starting thread on core 1 00:19:07.305 13:47:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:07.305 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.564 [2024-06-10 13:47:21.987129] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:11.759 [2024-06-10 13:47:25.977851] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:11.759 Initializing NVMe Controllers 00:19:11.759 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:11.759 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:11.759 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:11.759 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:11.759 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:11.759 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:11.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:11.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:11.759 Initialization complete. Launching workers. 00:19:11.759 Starting thread on core 1 with urgent priority queue 00:19:11.759 Starting thread on core 2 with urgent priority queue 00:19:11.759 Starting thread on core 3 with urgent priority queue 00:19:11.759 Starting thread on core 0 with urgent priority queue 00:19:11.759 SPDK bdev Controller (SPDK2 ) core 0: 3450.33 IO/s 28.98 secs/100000 ios 00:19:11.759 SPDK bdev Controller (SPDK2 ) core 1: 4031.33 IO/s 24.81 secs/100000 ios 00:19:11.759 SPDK bdev Controller (SPDK2 ) core 2: 2775.33 IO/s 36.03 secs/100000 ios 00:19:11.759 SPDK bdev Controller (SPDK2 ) core 3: 3007.00 IO/s 33.26 secs/100000 ios 00:19:11.759 ======================================================== 00:19:11.759 00:19:11.759 13:47:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:11.759 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.017 [2024-06-10 13:47:26.355109] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:12.017 Initializing NVMe Controllers 00:19:12.017 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:12.017 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:12.017 Namespace ID: 1 size: 0GB 00:19:12.017 Initialization complete. 00:19:12.017 INFO: using host memory buffer for IO 00:19:12.017 Hello world! 
00:19:12.017 [2024-06-10 13:47:26.368193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:12.017 13:47:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:12.277 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.277 [2024-06-10 13:47:26.741920] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:13.654 Initializing NVMe Controllers 00:19:13.654 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:13.654 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:13.654 Initialization complete. Launching workers. 00:19:13.654 submit (in ns) avg, min, max = 7615.7, 4068.8, 4002135.2 00:19:13.654 complete (in ns) avg, min, max = 26956.0, 2391.2, 4003453.6 00:19:13.654 00:19:13.654 Submit histogram 00:19:13.654 ================ 00:19:13.654 Range in us Cumulative Count 00:19:13.654 4.045 - 4.070: 0.0150% ( 2) 00:19:13.654 4.070 - 4.096: 0.4421% ( 57) 00:19:13.654 4.096 - 4.122: 1.7383% ( 173) 00:19:13.654 4.122 - 4.147: 4.7505% ( 402) 00:19:13.654 4.147 - 4.173: 11.1045% ( 848) 00:19:13.654 4.173 - 4.198: 18.7697% ( 1023) 00:19:13.654 4.198 - 4.224: 27.7312% ( 1196) 00:19:13.654 4.224 - 4.250: 35.2091% ( 998) 00:19:13.654 4.250 - 4.275: 41.9002% ( 893) 00:19:13.654 4.275 - 4.301: 51.1614% ( 1236) 00:19:13.654 4.301 - 4.326: 60.2278% ( 1210) 00:19:13.654 4.326 - 4.352: 71.5945% ( 1517) 00:19:13.654 4.352 - 4.378: 80.5635% ( 1197) 00:19:13.654 4.378 - 4.403: 84.9918% ( 591) 00:19:13.654 4.403 - 4.429: 87.0148% ( 270) 00:19:13.654 4.429 - 4.454: 88.1088% ( 146) 00:19:13.654 4.454 - 4.480: 89.2177% ( 148) 00:19:13.654 4.480 - 4.506: 90.1843% ( 129) 00:19:13.654 4.506 - 4.531: 91.5106% ( 177) 00:19:13.654 4.531 - 4.557: 92.9792% ( 196) 00:19:13.654 4.557 - 4.582: 94.6276% ( 220) 00:19:13.654 4.582 - 4.608: 96.0438% ( 189) 00:19:13.654 4.608 - 4.634: 97.4150% ( 183) 00:19:13.654 4.634 - 4.659: 98.3740% ( 128) 00:19:13.654 4.659 - 4.685: 98.9135% ( 72) 00:19:13.654 4.685 - 4.710: 99.2432% ( 44) 00:19:13.654 4.710 - 4.736: 99.4830% ( 32) 00:19:13.654 4.736 - 4.762: 99.5879% ( 14) 00:19:13.654 4.762 - 4.787: 99.6029% ( 2) 00:19:13.654 4.787 - 4.813: 99.6179% ( 2) 00:19:13.654 4.813 - 4.838: 99.6328% ( 2) 00:19:13.654 5.094 - 5.120: 99.6403% ( 1) 00:19:13.654 7.322 - 7.373: 99.6478% ( 1) 00:19:13.654 7.424 - 7.475: 99.6553% ( 1) 00:19:13.654 7.526 - 7.578: 99.6628% ( 1) 00:19:13.654 7.629 - 7.680: 99.6703% ( 1) 00:19:13.654 7.680 - 7.731: 99.6778% ( 1) 00:19:13.654 7.731 - 7.782: 99.6853% ( 1) 00:19:13.654 7.782 - 7.834: 99.7003% ( 2) 00:19:13.654 7.834 - 7.885: 99.7153% ( 2) 00:19:13.654 7.987 - 8.038: 99.7228% ( 1) 00:19:13.654 8.141 - 8.192: 99.7303% ( 1) 00:19:13.654 8.346 - 8.397: 99.7377% ( 1) 00:19:13.654 8.397 - 8.448: 99.7452% ( 1) 00:19:13.654 8.448 - 8.499: 99.7602% ( 2) 00:19:13.654 8.602 - 8.653: 99.7677% ( 1) 00:19:13.654 8.806 - 8.858: 99.7752% ( 1) 00:19:13.654 8.909 - 8.960: 99.7827% ( 1) 00:19:13.654 8.960 - 9.011: 99.7977% ( 2) 00:19:13.654 9.011 - 9.062: 99.8052% ( 1) 00:19:13.654 9.062 - 9.114: 99.8127% ( 1) 00:19:13.654 9.114 - 9.165: 99.8202% ( 1) 00:19:13.654 9.267 - 9.318: 99.8277% ( 1) 00:19:13.654 9.318 - 9.370: 99.8352% ( 1) 00:19:13.654 9.677 - 9.728: 99.8426% ( 1) 00:19:13.654 9.830 - 9.882: 99.8501% ( 1) 00:19:13.654 10.086 - 
10.138: 99.8576% ( 1) 00:19:13.654 10.138 - 10.189: 99.8651% ( 1) 00:19:13.654 10.291 - 10.342: 99.8726% ( 1) 00:19:13.654 10.342 - 10.394: 99.8801% ( 1) 00:19:13.654 10.906 - 10.957: 99.8876% ( 1) 00:19:13.654 12.032 - 12.083: 99.8951% ( 1) 00:19:13.654 12.288 - 12.339: 99.9026% ( 1) 00:19:13.654 14.950 - 15.053: 99.9101% ( 1) 00:19:13.654 15.872 - 15.974: 99.9176% ( 1) 00:19:13.654 3984.589 - 4010.803: 100.0000% ( 11) 00:19:13.655 00:19:13.655 Complete histogram 00:19:13.655 ================== 00:19:13.655 Range in us Cumulative Count 00:19:13.655 2.381 - 2.394: 0.0674% ( 9) 00:19:13.655 2.394 - 2.406: 5.3649% ( 707) 00:19:13.655 2.406 - 2.419: 38.2961% ( 4395) 00:19:13.655 2.419 - 2.432: 75.5208% ( 4968) 00:19:13.655 2.432 - 2.445: 86.4154% ( 1454) 00:19:13.655 2.445 - 2.458: 89.9071% ( 466) 00:19:13.655 2.458 - 2.470: 93.0016% ( 413) 00:19:13.655 2.470 - 2.483: 94.9798% ( 264) 00:19:13.655 2.483 - 2.496: 96.5458% ( 209) 00:19:13.655 2.496 - 2.509: 97.9320% ( 185) 00:19:13.655 2.509 - 2.522: 98.6063% ( 90) 00:19:13.655 2.522 - [2024-06-10 13:47:27.846503] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:13.655 2.534: 98.8161% ( 28) 00:19:13.655 2.534 - 2.547: 98.8985% ( 11) 00:19:13.655 2.547 - 2.560: 98.9510% ( 7) 00:19:13.655 2.560 - 2.573: 99.0034% ( 7) 00:19:13.655 2.573 - 2.586: 99.0334% ( 4) 00:19:13.655 2.586 - 2.598: 99.0484% ( 2) 00:19:13.655 2.598 - 2.611: 99.0859% ( 5) 00:19:13.655 2.611 - 2.624: 99.0934% ( 1) 00:19:13.655 2.624 - 2.637: 99.1158% ( 3) 00:19:13.655 2.637 - 2.650: 99.1458% ( 4) 00:19:13.655 2.650 - 2.662: 99.1533% ( 1) 00:19:13.655 2.688 - 2.701: 99.1608% ( 1) 00:19:13.655 2.701 - 2.714: 99.1683% ( 1) 00:19:13.655 2.726 - 2.739: 99.1833% ( 2) 00:19:13.655 2.739 - 2.752: 99.1908% ( 1) 00:19:13.655 3.098 - 3.110: 99.1983% ( 1) 00:19:13.655 5.197 - 5.222: 99.2058% ( 1) 00:19:13.655 5.325 - 5.350: 99.2132% ( 1) 00:19:13.655 5.530 - 5.555: 99.2207% ( 1) 00:19:13.655 5.658 - 5.683: 99.2282% ( 1) 00:19:13.655 5.862 - 5.888: 99.2357% ( 1) 00:19:13.655 5.990 - 6.016: 99.2432% ( 1) 00:19:13.655 6.298 - 6.323: 99.2507% ( 1) 00:19:13.655 6.374 - 6.400: 99.2582% ( 1) 00:19:13.655 6.502 - 6.528: 99.2657% ( 1) 00:19:13.655 6.758 - 6.810: 99.2732% ( 1) 00:19:13.655 6.810 - 6.861: 99.2957% ( 3) 00:19:13.655 7.066 - 7.117: 99.3032% ( 1) 00:19:13.655 7.117 - 7.168: 99.3107% ( 1) 00:19:13.655 7.168 - 7.219: 99.3181% ( 1) 00:19:13.655 7.373 - 7.424: 99.3256% ( 1) 00:19:13.655 7.782 - 7.834: 99.3331% ( 1) 00:19:13.655 7.987 - 8.038: 99.3406% ( 1) 00:19:13.655 8.090 - 8.141: 99.3481% ( 1) 00:19:13.655 8.141 - 8.192: 99.3556% ( 1) 00:19:13.655 8.243 - 8.294: 99.3631% ( 1) 00:19:13.655 12.390 - 12.442: 99.3706% ( 1) 00:19:13.655 12.646 - 12.698: 99.3781% ( 1) 00:19:13.655 211.354 - 212.992: 99.3856% ( 1) 00:19:13.655 3316.122 - 3329.229: 99.3931% ( 1) 00:19:13.655 3984.589 - 4010.803: 100.0000% ( 81) 00:19:13.655 00:19:13.655 13:47:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:13.655 13:47:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:13.655 13:47:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:13.655 13:47:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:13.655 13:47:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:13.655 [ 00:19:13.655 { 00:19:13.655 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:13.655 "subtype": "Discovery", 00:19:13.655 "listen_addresses": [], 00:19:13.655 "allow_any_host": true, 00:19:13.655 "hosts": [] 00:19:13.655 }, 00:19:13.655 { 00:19:13.655 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:13.655 "subtype": "NVMe", 00:19:13.655 "listen_addresses": [ 00:19:13.655 { 00:19:13.655 "trtype": "VFIOUSER", 00:19:13.655 "adrfam": "IPv4", 00:19:13.655 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:13.655 "trsvcid": "0" 00:19:13.655 } 00:19:13.655 ], 00:19:13.655 "allow_any_host": true, 00:19:13.655 "hosts": [], 00:19:13.655 "serial_number": "SPDK1", 00:19:13.655 "model_number": "SPDK bdev Controller", 00:19:13.655 "max_namespaces": 32, 00:19:13.655 "min_cntlid": 1, 00:19:13.655 "max_cntlid": 65519, 00:19:13.655 "namespaces": [ 00:19:13.655 { 00:19:13.655 "nsid": 1, 00:19:13.655 "bdev_name": "Malloc1", 00:19:13.655 "name": "Malloc1", 00:19:13.655 "nguid": "078BD6382C9E4FA29D145FC0D330D4D5", 00:19:13.655 "uuid": "078bd638-2c9e-4fa2-9d14-5fc0d330d4d5" 00:19:13.655 }, 00:19:13.655 { 00:19:13.655 "nsid": 2, 00:19:13.655 "bdev_name": "Malloc3", 00:19:13.655 "name": "Malloc3", 00:19:13.655 "nguid": "DFF4EACF21C24C8BA34E9BED8125D82F", 00:19:13.655 "uuid": "dff4eacf-21c2-4c8b-a34e-9bed8125d82f" 00:19:13.655 } 00:19:13.655 ] 00:19:13.655 }, 00:19:13.655 { 00:19:13.655 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:13.655 "subtype": "NVMe", 00:19:13.655 "listen_addresses": [ 00:19:13.655 { 00:19:13.655 "trtype": "VFIOUSER", 00:19:13.655 "adrfam": "IPv4", 00:19:13.655 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:13.655 "trsvcid": "0" 00:19:13.655 } 00:19:13.655 ], 00:19:13.655 "allow_any_host": true, 00:19:13.655 "hosts": [], 00:19:13.655 "serial_number": "SPDK2", 00:19:13.655 "model_number": "SPDK bdev Controller", 00:19:13.655 "max_namespaces": 32, 00:19:13.655 "min_cntlid": 1, 00:19:13.655 "max_cntlid": 65519, 00:19:13.655 "namespaces": [ 00:19:13.655 { 00:19:13.655 "nsid": 1, 00:19:13.655 "bdev_name": "Malloc2", 00:19:13.655 "name": "Malloc2", 00:19:13.655 "nguid": "4BED410A60934B059031F58B3E6FED1F", 00:19:13.655 "uuid": "4bed410a-6093-4b05-9031-f58b3e6fed1f" 00:19:13.655 } 00:19:13.655 ] 00:19:13.655 } 00:19:13.655 ] 00:19:13.915 13:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:13.915 13:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1351303 00:19:13.915 13:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:13.915 13:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:13.915 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:19:13.915 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:13.915 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:13.915 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:19:13.915 13:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:13.915 13:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:13.915 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.915 [2024-06-10 13:47:28.370441] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:13.915 Malloc4 00:19:14.174 13:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:14.174 [2024-06-10 13:47:28.611231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:14.174 13:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:14.434 Asynchronous Event Request test 00:19:14.434 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:14.434 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:14.434 Registering asynchronous event callbacks... 00:19:14.434 Starting namespace attribute notice tests for all controllers... 00:19:14.434 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:14.434 aer_cb - Changed Namespace 00:19:14.434 Cleaning up... 00:19:14.434 [ 00:19:14.434 { 00:19:14.434 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:14.434 "subtype": "Discovery", 00:19:14.434 "listen_addresses": [], 00:19:14.434 "allow_any_host": true, 00:19:14.434 "hosts": [] 00:19:14.434 }, 00:19:14.434 { 00:19:14.434 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:14.434 "subtype": "NVMe", 00:19:14.434 "listen_addresses": [ 00:19:14.434 { 00:19:14.434 "trtype": "VFIOUSER", 00:19:14.434 "adrfam": "IPv4", 00:19:14.434 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:14.434 "trsvcid": "0" 00:19:14.434 } 00:19:14.434 ], 00:19:14.434 "allow_any_host": true, 00:19:14.434 "hosts": [], 00:19:14.434 "serial_number": "SPDK1", 00:19:14.434 "model_number": "SPDK bdev Controller", 00:19:14.434 "max_namespaces": 32, 00:19:14.434 "min_cntlid": 1, 00:19:14.434 "max_cntlid": 65519, 00:19:14.434 "namespaces": [ 00:19:14.434 { 00:19:14.434 "nsid": 1, 00:19:14.434 "bdev_name": "Malloc1", 00:19:14.434 "name": "Malloc1", 00:19:14.434 "nguid": "078BD6382C9E4FA29D145FC0D330D4D5", 00:19:14.434 "uuid": "078bd638-2c9e-4fa2-9d14-5fc0d330d4d5" 00:19:14.434 }, 00:19:14.434 { 00:19:14.434 "nsid": 2, 00:19:14.434 "bdev_name": "Malloc3", 00:19:14.434 "name": "Malloc3", 00:19:14.434 "nguid": "DFF4EACF21C24C8BA34E9BED8125D82F", 00:19:14.434 "uuid": "dff4eacf-21c2-4c8b-a34e-9bed8125d82f" 00:19:14.434 } 00:19:14.434 ] 00:19:14.434 }, 00:19:14.434 { 00:19:14.434 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:14.434 "subtype": "NVMe", 00:19:14.434 "listen_addresses": [ 00:19:14.434 { 00:19:14.434 "trtype": "VFIOUSER", 00:19:14.434 "adrfam": "IPv4", 00:19:14.434 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:14.434 "trsvcid": "0" 00:19:14.434 } 00:19:14.434 ], 00:19:14.434 "allow_any_host": true, 00:19:14.434 "hosts": [], 00:19:14.434 "serial_number": "SPDK2", 00:19:14.434 "model_number": "SPDK bdev Controller", 00:19:14.434 
"max_namespaces": 32, 00:19:14.434 "min_cntlid": 1, 00:19:14.434 "max_cntlid": 65519, 00:19:14.434 "namespaces": [ 00:19:14.434 { 00:19:14.434 "nsid": 1, 00:19:14.434 "bdev_name": "Malloc2", 00:19:14.434 "name": "Malloc2", 00:19:14.434 "nguid": "4BED410A60934B059031F58B3E6FED1F", 00:19:14.434 "uuid": "4bed410a-6093-4b05-9031-f58b3e6fed1f" 00:19:14.434 }, 00:19:14.434 { 00:19:14.434 "nsid": 2, 00:19:14.434 "bdev_name": "Malloc4", 00:19:14.434 "name": "Malloc4", 00:19:14.434 "nguid": "9571F45D537F4FE886771FDEA9D3F2D5", 00:19:14.434 "uuid": "9571f45d-537f-4fe8-8677-1fdea9d3f2d5" 00:19:14.434 } 00:19:14.434 ] 00:19:14.434 } 00:19:14.434 ] 00:19:14.434 13:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1351303 00:19:14.434 13:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:14.434 13:47:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1343016 00:19:14.434 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 1343016 ']' 00:19:14.434 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 1343016 00:19:14.434 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:19:14.434 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:14.434 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1343016 00:19:14.692 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:14.692 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:14.692 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1343016' 00:19:14.692 killing process with pid 1343016 00:19:14.692 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 1343016 00:19:14.692 13:47:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 1343016 00:19:14.951 13:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1351468 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1351468' 00:19:14.952 Process pid: 1351468 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1351468 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 1351468 ']' 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.952 13:47:29 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:14.952 13:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:14.952 [2024-06-10 13:47:29.266601] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:14.952 [2024-06-10 13:47:29.267813] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:19:14.952 [2024-06-10 13:47:29.267859] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.952 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.952 [2024-06-10 13:47:29.390021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:15.211 [2024-06-10 13:47:29.468349] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.211 [2024-06-10 13:47:29.468403] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.211 [2024-06-10 13:47:29.468418] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.211 [2024-06-10 13:47:29.468430] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.211 [2024-06-10 13:47:29.468440] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.211 [2024-06-10 13:47:29.468503] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.211 [2024-06-10 13:47:29.468602] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.211 [2024-06-10 13:47:29.468691] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.211 [2024-06-10 13:47:29.468692] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:19:15.211 [2024-06-10 13:47:29.559161] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:15.211 [2024-06-10 13:47:29.559374] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:15.211 [2024-06-10 13:47:29.560038] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:19:15.211 [2024-06-10 13:47:29.560244] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:15.211 [2024-06-10 13:47:29.560645] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
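(For orientation before the trace that follows: the interrupt-mode run re-creates the same two VFIO-user devices. Condensed, with the long rpc.py path abbreviated and all flags copied from the traced calls below, the per-device bring-up is roughly:
    rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
with the same sequence repeated for Malloc2 / nqn.2019-07.io.spdk:cnode2 at /var/run/vfio-user/domain/vfio-user2/2.)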
00:19:15.778 13:47:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:15.778 13:47:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:19:15.778 13:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:16.772 13:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:17.031 13:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:17.031 13:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:17.031 13:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:17.031 13:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:17.031 13:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:17.290 Malloc1 00:19:17.290 13:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:17.549 13:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:17.807 13:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:18.066 13:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:18.066 13:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:18.066 13:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:18.325 Malloc2 00:19:18.325 13:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:18.584 13:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:18.842 13:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:18.842 13:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:18.842 13:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1351468 00:19:18.842 13:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 1351468 ']' 00:19:18.842 13:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 1351468 00:19:18.842 13:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:19:18.842 13:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:18.842 13:47:33 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1351468 00:19:19.101 13:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:19.101 13:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:19.101 13:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1351468' 00:19:19.101 killing process with pid 1351468 00:19:19.101 13:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 1351468 00:19:19.101 13:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 1351468 00:19:19.101 13:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:19.101 13:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:19.101 00:19:19.101 real 0m54.526s 00:19:19.101 user 3m33.793s 00:19:19.101 sys 0m5.757s 00:19:19.101 13:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:19.101 13:47:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:19.101 ************************************ 00:19:19.101 END TEST nvmf_vfio_user 00:19:19.101 ************************************ 00:19:19.361 13:47:33 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:19.361 13:47:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:19.361 13:47:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:19.361 13:47:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:19.361 ************************************ 00:19:19.361 START TEST nvmf_vfio_user_nvme_compliance 00:19:19.361 ************************************ 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:19.361 * Looking for test storage... 
00:19:19.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.361 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1352354 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1352354' 00:19:19.362 Process pid: 1352354 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1352354 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 1352354 ']' 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:19.362 13:47:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:19.362 [2024-06-10 13:47:33.822423] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:19:19.362 [2024-06-10 13:47:33.822491] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.621 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.621 [2024-06-10 13:47:33.944494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:19.621 [2024-06-10 13:47:34.027798] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.621 [2024-06-10 13:47:34.027846] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.621 [2024-06-10 13:47:34.027860] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.621 [2024-06-10 13:47:34.027872] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.621 [2024-06-10 13:47:34.027882] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:19.621 [2024-06-10 13:47:34.027938] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.621 [2024-06-10 13:47:34.028044] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.621 [2024-06-10 13:47:34.028047] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.557 13:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:20.557 13:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:19:20.557 13:47:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:21.494 malloc0 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:21.494 13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.494 
13:47:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:21.494 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.494 00:19:21.494 00:19:21.494 CUnit - A unit testing framework for C - Version 2.1-3 00:19:21.494 http://cunit.sourceforge.net/ 00:19:21.494 00:19:21.494 00:19:21.494 Suite: nvme_compliance 00:19:21.753 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-10 13:47:36.001125] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:21.753 [2024-06-10 13:47:36.002587] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:21.753 [2024-06-10 13:47:36.002607] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:21.753 [2024-06-10 13:47:36.002619] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:21.753 [2024-06-10 13:47:36.004157] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:21.753 passed 00:19:21.754 Test: admin_identify_ctrlr_verify_fused ...[2024-06-10 13:47:36.098861] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:21.754 [2024-06-10 13:47:36.101873] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:21.754 passed 00:19:21.754 Test: admin_identify_ns ...[2024-06-10 13:47:36.198232] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.012 [2024-06-10 13:47:36.257589] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:22.012 [2024-06-10 13:47:36.265595] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:22.012 [2024-06-10 13:47:36.286716] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.012 passed 00:19:22.012 Test: admin_get_features_mandatory_features ...[2024-06-10 13:47:36.378684] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.012 [2024-06-10 13:47:36.381713] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.012 passed 00:19:22.012 Test: admin_get_features_optional_features ...[2024-06-10 13:47:36.474322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.012 [2024-06-10 13:47:36.477340] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.271 passed 00:19:22.271 Test: admin_set_features_number_of_queues ...[2024-06-10 13:47:36.569813] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.271 [2024-06-10 13:47:36.673694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.271 passed 00:19:22.529 Test: admin_get_log_page_mandatory_logs ...[2024-06-10 13:47:36.765209] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.529 [2024-06-10 13:47:36.768231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.529 passed 00:19:22.530 Test: admin_get_log_page_with_lpo ...[2024-06-10 13:47:36.860756] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.530 [2024-06-10 13:47:36.930589] 
ctrlr.c:2656:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:22.530 [2024-06-10 13:47:36.943679] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.530 passed 00:19:22.788 Test: fabric_property_get ...[2024-06-10 13:47:37.032212] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.788 [2024-06-10 13:47:37.033522] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:22.788 [2024-06-10 13:47:37.035241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.788 passed 00:19:22.788 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-10 13:47:37.128991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.788 [2024-06-10 13:47:37.130266] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:22.788 [2024-06-10 13:47:37.132017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.788 passed 00:19:22.788 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-10 13:47:37.219496] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:23.047 [2024-06-10 13:47:37.300584] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:23.047 [2024-06-10 13:47:37.316585] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:23.047 [2024-06-10 13:47:37.321683] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:23.047 passed 00:19:23.047 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-10 13:47:37.414630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:23.047 [2024-06-10 13:47:37.415887] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:23.047 [2024-06-10 13:47:37.417629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:23.047 passed 00:19:23.047 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-10 13:47:37.509148] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:23.306 [2024-06-10 13:47:37.584587] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:23.306 [2024-06-10 13:47:37.608593] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:23.306 [2024-06-10 13:47:37.613684] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:23.306 passed 00:19:23.306 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-10 13:47:37.706203] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:23.306 [2024-06-10 13:47:37.707470] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:23.306 [2024-06-10 13:47:37.707497] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:23.306 [2024-06-10 13:47:37.709223] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:23.306 passed 00:19:23.566 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-10 13:47:37.798701] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:23.566 [2024-06-10 13:47:37.891600] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:19:23.566 [2024-06-10 13:47:37.899588] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:23.566 [2024-06-10 13:47:37.907597] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:23.566 [2024-06-10 13:47:37.915594] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:23.566 [2024-06-10 13:47:37.944686] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:23.566 passed 00:19:23.566 Test: admin_create_io_sq_verify_pc ...[2024-06-10 13:47:38.036596] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:23.824 [2024-06-10 13:47:38.052597] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:23.824 [2024-06-10 13:47:38.072853] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:23.824 passed 00:19:23.824 Test: admin_create_io_qp_max_qps ...[2024-06-10 13:47:38.164485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:25.203 [2024-06-10 13:47:39.263590] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:19:25.203 [2024-06-10 13:47:39.644083] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:25.462 passed 00:19:25.462 Test: admin_create_io_sq_shared_cq ...[2024-06-10 13:47:39.733831] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:25.462 [2024-06-10 13:47:39.869584] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:25.462 [2024-06-10 13:47:39.906665] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:25.721 passed 00:19:25.721 00:19:25.721 Run Summary: Type Total Ran Passed Failed Inactive 00:19:25.721 suites 1 1 n/a 0 0 00:19:25.721 tests 18 18 18 0 0 00:19:25.721 asserts 360 360 360 0 n/a 00:19:25.721 00:19:25.721 Elapsed time = 1.632 seconds 00:19:25.721 13:47:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1352354 00:19:25.721 13:47:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 1352354 ']' 00:19:25.721 13:47:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 1352354 00:19:25.721 13:47:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:19:25.721 13:47:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:25.721 13:47:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1352354 00:19:25.721 13:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:25.721 13:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:25.721 13:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1352354' 00:19:25.721 killing process with pid 1352354 00:19:25.721 13:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 1352354 00:19:25.721 13:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 1352354 00:19:25.981 13:47:40 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:25.981 00:19:25.981 real 0m6.599s 00:19:25.981 user 0m18.477s 00:19:25.981 sys 0m0.789s 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:25.981 ************************************ 00:19:25.981 END TEST nvmf_vfio_user_nvme_compliance 00:19:25.981 ************************************ 00:19:25.981 13:47:40 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:25.981 13:47:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:25.981 13:47:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:25.981 13:47:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:25.981 ************************************ 00:19:25.981 START TEST nvmf_vfio_user_fuzz 00:19:25.981 ************************************ 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:25.981 * Looking for test storage... 00:19:25.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.981 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.982 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1353487 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1353487' 00:19:26.241 Process pid: 1353487 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1353487 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 1353487 ']' 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:26.241 13:47:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:27.178 13:47:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:27.178 13:47:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:19:27.178 13:47:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:28.116 malloc0 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:28.116 13:47:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:00.200 Fuzzing completed. 
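For reference, the target that this fuzz pass exercises is assembled entirely from the RPCs visible in the trace above. Condensed into a standalone sketch (rpc.py standing in for the rpc_cmd wrapper, run from the spdk checkout; paths, sizes and seed are the ones from this run, not a canonical script):

# single-core nvmf target, as started by vfio_user_fuzz.sh above
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# wait for the RPC socket (/var/tmp/spdk.sock) to come up; the test uses waitforlisten for this
rm -rf /var/run/vfio-user && mkdir -p /var/run/vfio-user

# VFIOUSER transport backed by a 64 MiB, 512 B-block malloc bdev
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

# 30-second fuzz run against the vfio-user endpoint, fixed seed as in the trace
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a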
Shutting down the fuzz application 00:20:00.200 00:20:00.200 Dumping successful admin opcodes: 00:20:00.200 8, 9, 10, 24, 00:20:00.200 Dumping successful io opcodes: 00:20:00.200 0, 00:20:00.200 NS: 0x200003a1ef00 I/O qp, Total commands completed: 669160, total successful commands: 2612, random_seed: 3416509504 00:20:00.200 NS: 0x200003a1ef00 admin qp, Total commands completed: 164060, total successful commands: 1325, random_seed: 1082286016 00:20:00.200 13:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:00.200 13:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.200 13:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:00.200 13:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.200 13:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1353487 00:20:00.200 13:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 1353487 ']' 00:20:00.200 13:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 1353487 00:20:00.200 13:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:20:00.200 13:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:00.200 13:48:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1353487 00:20:00.200 13:48:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:00.200 13:48:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:00.200 13:48:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1353487' 00:20:00.200 killing process with pid 1353487 00:20:00.200 13:48:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # kill 1353487 00:20:00.200 13:48:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 1353487 00:20:00.200 13:48:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:00.200 13:48:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:00.200 00:20:00.200 real 0m33.029s 00:20:00.200 user 0m30.892s 00:20:00.200 sys 0m31.723s 00:20:00.200 13:48:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:00.200 13:48:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:00.200 ************************************ 00:20:00.200 END TEST nvmf_vfio_user_fuzz 00:20:00.200 ************************************ 00:20:00.200 13:48:13 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:00.200 13:48:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:00.200 13:48:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:00.200 13:48:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:00.200 ************************************ 00:20:00.200 START TEST nvmf_host_management 00:20:00.200 
************************************ 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:00.200 * Looking for test storage... 00:20:00.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:20:00.200 13:48:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.321 13:48:22 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:08.321 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:08.321 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:08.321 Found net devices under 0000:af:00.0: cvl_0_0 00:20:08.321 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:08.322 Found net devices under 0000:af:00.1: cvl_0_1 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.322 13:48:22 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:08.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:20:08.322 00:20:08.322 --- 10.0.0.2 ping statistics --- 00:20:08.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.322 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:20:08.322 00:20:08.322 --- 10.0.0.1 ping statistics --- 00:20:08.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.322 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1363738 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1363738 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 1363738 ']' 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
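Before the host-management test can talk NVMe/TCP to itself, nvmf_tcp_init splits the two E810 (ice) ports between the root namespace (initiator side) and a private namespace (target side). The trace above reduces to roughly the following; interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this node:

# move the target port into its own network namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# 10.0.0.1 = initiator (root ns, cvl_0_1), 10.0.0.2 = target (inside the ns, cvl_0_0)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1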
00:20:08.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:08.322 13:48:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:08.322 [2024-06-10 13:48:22.484847] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:20:08.322 [2024-06-10 13:48:22.484914] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.322 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.322 [2024-06-10 13:48:22.603319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.322 [2024-06-10 13:48:22.690330] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.322 [2024-06-10 13:48:22.690374] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.322 [2024-06-10 13:48:22.690388] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.322 [2024-06-10 13:48:22.690400] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.322 [2024-06-10 13:48:22.690410] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.322 [2024-06-10 13:48:22.690520] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.322 [2024-06-10 13:48:22.690630] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.322 [2024-06-10 13:48:22.690740] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.322 [2024-06-10 13:48:22.690740] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:20:09.259 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:09.259 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:20:09.259 13:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.259 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:09.259 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:09.259 13:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.259 13:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:09.259 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.259 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:09.259 [2024-06-10 13:48:23.450791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.259 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.259 13:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:20:09.259 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:09.260 Malloc0 00:20:09.260 [2024-06-10 13:48:23.518754] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1363972 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1363972 /var/tmp/bdevperf.sock 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 1363972 ']' 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.260 { 00:20:09.260 "params": { 00:20:09.260 "name": "Nvme$subsystem", 00:20:09.260 "trtype": "$TEST_TRANSPORT", 00:20:09.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.260 "adrfam": "ipv4", 00:20:09.260 "trsvcid": "$NVMF_PORT", 00:20:09.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.260 "hdgst": ${hdgst:-false}, 00:20:09.260 "ddgst": ${ddgst:-false} 00:20:09.260 }, 00:20:09.260 "method": "bdev_nvme_attach_controller" 00:20:09.260 } 00:20:09.260 EOF 00:20:09.260 )") 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:20:09.260 13:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:09.260 "params": { 00:20:09.260 "name": "Nvme0", 00:20:09.260 "trtype": "tcp", 00:20:09.260 "traddr": "10.0.0.2", 00:20:09.260 "adrfam": "ipv4", 00:20:09.260 "trsvcid": "4420", 00:20:09.260 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.260 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:09.260 "hdgst": false, 00:20:09.260 "ddgst": false 00:20:09.260 }, 00:20:09.260 "method": "bdev_nvme_attach_controller" 00:20:09.260 }' 00:20:09.260 [2024-06-10 13:48:23.626141] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:20:09.260 [2024-06-10 13:48:23.626203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1363972 ] 00:20:09.260 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.519 [2024-06-10 13:48:23.745581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.519 [2024-06-10 13:48:23.826693] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.809 Running I/O for 10 seconds... 
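The JSON that bdevperf receives on /dev/fd/63 is the fragment printed just above, wrapped by gen_nvmf_target_json into a bdev-subsystem config. A hand-written equivalent might look like the following sketch; the /tmp path and the subsystems/config wrapper are assumptions, while the attach parameters and the bdevperf options are exactly those shown in the trace:

# config file feeding bdevperf one NVMe-oF controller over TCP (assumed wrapper layout)
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 64-deep, 64 KiB verify workload for 10 seconds, as launched by host_management.sh
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10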
00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:10.124 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.384 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:20:10.384 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:20:10.384 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:20:10.384 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:20:10.384 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:20:10.384 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:10.384 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.384 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:10.384 [2024-06-10 13:48:24.614481] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e6650 is same with the state(5) to be set 00:20:10.384 [2024-06-10 13:48:24.614532] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e6650 is same with the state(5) to be set 00:20:10.384 [2024-06-10 13:48:24.615061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:10.384 [2024-06-10 13:48:24.615103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.384 [2024-06-10 13:48:24.615128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.384 [2024-06-10 13:48:24.615143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.384 [2024-06-10 13:48:24.615158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.384 [2024-06-10 13:48:24.615173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.384 [2024-06-10 13:48:24.615189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.384 [2024-06-10 13:48:24.615212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.384 [2024-06-10 13:48:24.615228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.384 [2024-06-10 13:48:24.615241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.384 [2024-06-10 13:48:24.615257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.384 [2024-06-10 13:48:24.615271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.384 [2024-06-10 13:48:24.615287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.384 [2024-06-10 13:48:24.615300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.384 [2024-06-10 13:48:24.615316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.384 [2024-06-10 13:48:24.615330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.384 [2024-06-10 13:48:24.615345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.385 [2024-06-10 13:48:24.615359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [2024-06-10 13:48:24.615375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.385 [2024-06-10 13:48:24.615389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [2024-06-10 13:48:24.615404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:10.385 [2024-06-10 13:48:24.615417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [2024-06-10 13:48:24.615433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.385 [2024-06-10 13:48:24.615446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [2024-06-10 13:48:24.615462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.385 [2024-06-10 13:48:24.615475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [2024-06-10 13:48:24.615492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.385 [2024-06-10 13:48:24.615506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [2024-06-10 13:48:24.615522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.385 [2024-06-10 13:48:24.615535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [2024-06-10 13:48:24.615551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.385 [2024-06-10 13:48:24.615564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [2024-06-10 13:48:24.615588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.385 [2024-06-10 13:48:24.615601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [2024-06-10 13:48:24.615616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.385 [2024-06-10 13:48:24.615629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [2024-06-10 13:48:24.615644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.385 [2024-06-10 13:48:24.615658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [2024-06-10 13:48:24.615674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.385 [2024-06-10 13:48:24.615687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [2024-06-10 13:48:24.615703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:10.385 [2024-06-10 13:48:24.615717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.385 [repetitive notices elided: every remaining outstanding command on qid:1, WRITE cid:38-63 (lba 86784-89984, len:128) and READ cid:0-12 (lba 81920-83456, len:128), is printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) while the controller is reset] 00:20:10.386 [2024-06-10 13:48:24.616877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.386 [2024-06-10 13:48:24.616890] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.386 [2024-06-10 13:48:24.616906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.386 [2024-06-10 13:48:24.616920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.386 [2024-06-10 13:48:24.616935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.386 [2024-06-10 13:48:24.616949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.386 [2024-06-10 13:48:24.616964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.386 [2024-06-10 13:48:24.616978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.386 [2024-06-10 13:48:24.616992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab8d70 is same with the state(5) to be set 00:20:10.386 [2024-06-10 13:48:24.617054] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xab8d70 was disconnected and freed. reset controller. 00:20:10.386 [2024-06-10 13:48:24.618261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.386 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.386 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:10.386 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.386 task offset: 84096 on job bdev=Nvme0n1 fails 00:20:10.386 00:20:10.386 Latency(us) 00:20:10.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.386 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.386 Job: Nvme0n1 ended in about 0.54 seconds with error 00:20:10.386 Verification LBA range: start 0x0 length 0x400 00:20:10.386 Nvme0n1 : 0.54 1189.78 74.36 118.98 0.00 47647.29 2529.69 44879.05 00:20:10.386 =================================================================================================================== 00:20:10.386 Total : 1189.78 74.36 118.98 0.00 47647.29 2529.69 44879.05 00:20:10.386 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:10.386 [2024-06-10 13:48:24.620364] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:10.386 [2024-06-10 13:48:24.620389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x687820 (9): Bad file descriptor 00:20:10.386 [2024-06-10 13:48:24.622068] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:20:10.386 [2024-06-10 13:48:24.622251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:10.386 [2024-06-10 13:48:24.622281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:10.386 [2024-06-10 13:48:24.622304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:20:10.386 [2024-06-10 13:48:24.622319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:20:10.386 [2024-06-10 13:48:24.622333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:10.386 [2024-06-10 13:48:24.622346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x687820 00:20:10.386 [2024-06-10 13:48:24.622372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x687820 (9): Bad file descriptor 00:20:10.386 [2024-06-10 13:48:24.622391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.386 [2024-06-10 13:48:24.622405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.386 [2024-06-10 13:48:24.622420] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.386 [2024-06-10 13:48:24.622438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.386 13:48:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.386 13:48:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:20:11.322 13:48:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1363972 00:20:11.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1363972) - No such process 00:20:11.322 13:48:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:20:11.322 13:48:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:20:11.323 13:48:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:11.323 13:48:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:20:11.323 13:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:20:11.323 13:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:20:11.323 13:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.323 13:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.323 { 00:20:11.323 "params": { 00:20:11.323 "name": "Nvme$subsystem", 00:20:11.323 "trtype": "$TEST_TRANSPORT", 00:20:11.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.323 "adrfam": "ipv4", 00:20:11.323 "trsvcid": "$NVMF_PORT", 00:20:11.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.323 "hdgst": ${hdgst:-false}, 00:20:11.323 "ddgst": ${ddgst:-false} 00:20:11.323 }, 00:20:11.323 "method": "bdev_nvme_attach_controller" 00:20:11.323 } 00:20:11.323 EOF 00:20:11.323 )") 00:20:11.323 13:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:20:11.323 13:48:25 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:20:11.323 13:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:20:11.323 13:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:11.323 "params": { 00:20:11.323 "name": "Nvme0", 00:20:11.323 "trtype": "tcp", 00:20:11.323 "traddr": "10.0.0.2", 00:20:11.323 "adrfam": "ipv4", 00:20:11.323 "trsvcid": "4420", 00:20:11.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:11.323 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:11.323 "hdgst": false, 00:20:11.323 "ddgst": false 00:20:11.323 }, 00:20:11.323 "method": "bdev_nvme_attach_controller" 00:20:11.323 }' 00:20:11.323 [2024-06-10 13:48:25.687721] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:20:11.323 [2024-06-10 13:48:25.687786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364332 ] 00:20:11.323 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.581 [2024-06-10 13:48:25.807305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.581 [2024-06-10 13:48:25.888245] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.840 Running I/O for 1 seconds... 00:20:12.776 00:20:12.776 Latency(us) 00:20:12.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.776 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:12.776 Verification LBA range: start 0x0 length 0x400 00:20:12.776 Nvme0n1 : 1.03 1303.20 81.45 0.00 0.00 48176.74 9175.04 42991.62 00:20:12.776 =================================================================================================================== 00:20:12.776 Total : 1303.20 81.45 0.00 0.00 48176.74 9175.04 42991.62 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:13.035 rmmod nvme_tcp 00:20:13.035 rmmod nvme_fabrics 00:20:13.035 rmmod nvme_keyring 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:20:13.035 
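For reference, a minimal hand-run sketch of what this host_management step boils down to, executed from the SPDK repo root. The NQNs, target address, and bdevperf flags are the ones shown in the log above; the "subsystems"/"bdev" wrapper around the printed bdev_nvme_attach_controller fragment is an assumption about what gen_nvmf_target_json emits, and the temp-file path stands in for the /dev/fd/62 process substitution the test uses.

# Allow the previously rejected host NQN on the subsystem.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Hand bdevperf a config equivalent to the JSON fragment printed above.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1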
13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1363738 ']' 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1363738 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 1363738 ']' 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 1363738 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1363738 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1363738' 00:20:13.035 killing process with pid 1363738 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 1363738 00:20:13.035 13:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 1363738 00:20:13.294 [2024-06-10 13:48:27.661787] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:20:13.294 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:13.294 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:13.294 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:13.294 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.294 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:13.294 13:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.294 13:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.294 13:48:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.828 13:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:15.828 13:48:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:15.828 00:20:15.828 real 0m16.340s 00:20:15.828 user 0m24.916s 00:20:15.828 sys 0m8.277s 00:20:15.828 13:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:15.828 13:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:15.828 ************************************ 00:20:15.828 END TEST nvmf_host_management 00:20:15.828 ************************************ 00:20:15.828 13:48:29 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:15.828 13:48:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:15.828 13:48:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:15.828 13:48:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.828 ************************************ 00:20:15.828 START TEST nvmf_lvol 00:20:15.828 ************************************ 00:20:15.828 
13:48:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:15.828 * Looking for test storage... 00:20:15.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.828 13:48:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.829 13:48:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.829 13:48:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:15.829 13:48:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:15.829 13:48:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:20:15.829 13:48:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:23.946 13:48:38 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:23.946 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:23.946 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:23.946 Found net devices under 0000:af:00.0: cvl_0_0 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:23.946 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:23.947 Found net devices under 0000:af:00.1: cvl_0_1 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:23.947 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.206 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.206 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.206 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.206 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:24.206 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.206 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.206 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:24.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:20:24.465 00:20:24.465 --- 10.0.0.2 ping statistics --- 00:20:24.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.465 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:24.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:20:24.465 00:20:24.465 --- 10.0.0.1 ping statistics --- 00:20:24.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.465 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1369050 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1369050 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 1369050 ']' 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:24.465 13:48:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:24.465 [2024-06-10 13:48:38.808753] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:20:24.465 [2024-06-10 13:48:38.808810] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.465 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.724 [2024-06-10 13:48:38.937610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:24.724 [2024-06-10 13:48:39.020410] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.724 [2024-06-10 13:48:39.020458] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
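A condensed sketch of the nvmf_tcp_init steps shown above, using the interface names and addresses this run detected: one port of the NIC (cvl_0_0) is moved into a namespace that plays the target, the other port (cvl_0_1) stays in the root namespace as the initiator, and the two pings verify the path before the target application is started.

ip netns add cvl_0_0_ns_spdk                                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns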
00:20:24.724 [2024-06-10 13:48:39.020472] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.724 [2024-06-10 13:48:39.020484] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.724 [2024-06-10 13:48:39.020494] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.724 [2024-06-10 13:48:39.020550] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.724 [2024-06-10 13:48:39.020660] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.724 [2024-06-10 13:48:39.020666] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.291 13:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:25.291 13:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:20:25.291 13:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:25.291 13:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:25.291 13:48:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:25.291 13:48:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.291 13:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:25.549 [2024-06-10 13:48:39.914992] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.549 13:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:25.807 13:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:20:25.807 13:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:26.066 13:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:20:26.066 13:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:20:26.325 13:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:20:26.584 13:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7ecfcd43-8fba-489f-a213-64e1d99c4c68 00:20:26.584 13:48:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7ecfcd43-8fba-489f-a213-64e1d99c4c68 lvol 20 00:20:26.843 13:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=dc82348a-130d-4504-a4d1-21c869482796 00:20:26.843 13:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:27.101 13:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dc82348a-130d-4504-a4d1-21c869482796 00:20:27.360 13:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
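The target-side provisioning the nvmf_lvol test just performed reduces to the rpc.py sequence below, run against the nvmf_tgt started with -m 0x7. Command names, sizes, the NQN and the 10.0.0.2:4420 listener are the values shown above; capturing the lvstore and lvol UUIDs through command substitution is an assumption about how the lvs/lvol shell variables get their values (this run reported 7ecfcd43-8fba-489f-a213-64e1d99c4c68 and dc82348a-130d-4504-a4d1-21c869482796).

rpc=scripts/rpc.py                                        # from the SPDK repo root
$rpc nvmf_create_transport -t tcp -o -u 8192              # TCP transport, flags as used by the test
$rpc bdev_malloc_create 64 512                            # Malloc0: 64 MiB, 512 B blocks
$rpc bdev_malloc_create 64 512                            # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)            # lvstore on top of the raid0 bdev
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)           # 20 MiB lvol inside it
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420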
00:20:27.618 [2024-06-10 13:48:41.893927] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.618 13:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:27.876 13:48:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1369637 00:20:27.876 13:48:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:20:27.876 13:48:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:20:27.876 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.813 13:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot dc82348a-130d-4504-a4d1-21c869482796 MY_SNAPSHOT 00:20:29.072 13:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9c635fe6-60fe-4195-9f60-e2476e3a08a1 00:20:29.072 13:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize dc82348a-130d-4504-a4d1-21c869482796 30 00:20:29.330 13:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9c635fe6-60fe-4195-9f60-e2476e3a08a1 MY_CLONE 00:20:29.589 13:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3341af93-7ab7-4999-81bc-59ba0ff9a55a 00:20:29.589 13:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3341af93-7ab7-4999-81bc-59ba0ff9a55a 00:20:29.848 13:48:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1369637 00:20:39.826 Initializing NVMe Controllers 00:20:39.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:20:39.826 Controller IO queue size 128, less than required. 00:20:39.826 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:39.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:20:39.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:20:39.826 Initialization complete. Launching workers. 
00:20:39.826 ======================================================== 00:20:39.826 Latency(us) 00:20:39.826 Device Information : IOPS MiB/s Average min max 00:20:39.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9860.20 38.52 12985.61 2357.36 74387.37 00:20:39.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9379.10 36.64 13652.63 3563.11 53640.31 00:20:39.826 ======================================================== 00:20:39.826 Total : 19239.30 75.15 13310.78 2357.36 74387.37 00:20:39.826 00:20:39.826 13:48:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:39.826 13:48:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dc82348a-130d-4504-a4d1-21c869482796 00:20:39.826 13:48:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7ecfcd43-8fba-489f-a213-64e1d99c4c68 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.826 rmmod nvme_tcp 00:20:39.826 rmmod nvme_fabrics 00:20:39.826 rmmod nvme_keyring 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1369050 ']' 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1369050 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 1369050 ']' 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 1369050 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1369050 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1369050' 00:20:39.826 killing process with pid 1369050 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 1369050 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 1369050 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:39.826 
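While spdk_nvme_perf kept writing over the fabric, the test snapshotted, grew, cloned and inflated that lvol before tearing everything down; a sketch of that sequence using the identifiers reported in this run (lvol dc82348a-..., snapshot 9c635fe6-..., clone 3341af93-..., lvstore 7ecfcd43-...):

rpc=scripts/rpc.py                                                        # from the SPDK repo root
$rpc bdev_lvol_snapshot dc82348a-130d-4504-a4d1-21c869482796 MY_SNAPSHOT  # snapshot the live lvol
$rpc bdev_lvol_resize   dc82348a-130d-4504-a4d1-21c869482796 30           # grow it from 20 to 30 MiB
$rpc bdev_lvol_clone    9c635fe6-60fe-4195-9f60-e2476e3a08a1 MY_CLONE     # thin clone of the snapshot
$rpc bdev_lvol_inflate  3341af93-7ab7-4999-81bc-59ba0ff9a55a              # copy clusters in so the clone no longer depends on its snapshot
# teardown once perf completes
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete dc82348a-130d-4504-a4d1-21c869482796
$rpc bdev_lvol_delete_lvstore -u 7ecfcd43-8fba-489f-a213-64e1d99c4c68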
13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.826 13:48:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.204 13:48:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:41.204 00:20:41.204 real 0m25.758s 00:20:41.204 user 1m5.567s 00:20:41.204 sys 0m11.614s 00:20:41.204 13:48:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:41.204 13:48:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:41.204 ************************************ 00:20:41.204 END TEST nvmf_lvol 00:20:41.204 ************************************ 00:20:41.204 13:48:55 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:20:41.204 13:48:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:41.204 13:48:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:41.204 13:48:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:41.463 ************************************ 00:20:41.463 START TEST nvmf_lvs_grow 00:20:41.463 ************************************ 00:20:41.463 13:48:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:20:41.463 * Looking for test storage... 
00:20:41.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:41.463 13:48:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.463 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:20:41.463 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.463 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.463 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.463 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.463 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.463 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.463 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.463 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.463 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:20:41.464 13:48:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:51.498 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:51.498 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:51.498 Found net devices under 0000:af:00.0: cvl_0_0 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:51.498 Found net devices under 0000:af:00.1: cvl_0_1 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.498 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:51.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:20:51.498 00:20:51.498 --- 10.0.0.2 ping statistics --- 00:20:51.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.499 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:51.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:20:51.499 00:20:51.499 --- 10.0.0.1 ping statistics --- 00:20:51.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.499 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1376173 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1376173 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 1376173 ']' 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:51.499 13:49:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:51.499 [2024-06-10 13:49:04.875541] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:20:51.499 [2024-06-10 13:49:04.875604] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.499 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.499 [2024-06-10 13:49:05.000566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.499 [2024-06-10 13:49:05.083979] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.499 [2024-06-10 13:49:05.084024] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:51.499 [2024-06-10 13:49:05.084037] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.499 [2024-06-10 13:49:05.084049] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.499 [2024-06-10 13:49:05.084059] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.499 [2024-06-10 13:49:05.084088] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.499 13:49:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:51.499 13:49:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:20:51.499 13:49:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:51.499 13:49:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:51.499 13:49:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:51.499 13:49:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.499 13:49:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:51.758 [2024-06-10 13:49:06.029239] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:51.758 ************************************ 00:20:51.758 START TEST lvs_grow_clean 00:20:51.758 ************************************ 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:51.758 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:52.017 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:20:52.017 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:52.276 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=33f0794b-85b6-43eb-8b84-69c8bc3300cf 00:20:52.276 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33f0794b-85b6-43eb-8b84-69c8bc3300cf 00:20:52.276 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:52.535 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:52.535 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:52.535 13:49:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 33f0794b-85b6-43eb-8b84-69c8bc3300cf lvol 150 00:20:52.794 13:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b34e49b0-310e-43f4-8fae-c21a5e07a67a 00:20:52.794 13:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:52.794 13:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:53.053 [2024-06-10 13:49:07.266018] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:53.053 [2024-06-10 13:49:07.266084] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:53.053 true 00:20:53.053 13:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33f0794b-85b6-43eb-8b84-69c8bc3300cf 00:20:53.053 13:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:53.053 13:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:53.053 13:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:53.313 13:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b34e49b0-310e-43f4-8fae-c21a5e07a67a 00:20:53.572 13:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:53.831 [2024-06-10 13:49:08.156806] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.831 13:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:54.090 13:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1376757 00:20:54.090 13:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:54.090 13:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:54.091 13:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1376757 /var/tmp/bdevperf.sock 00:20:54.091 13:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 1376757 ']' 00:20:54.091 13:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.091 13:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:54.091 13:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.091 13:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:54.091 13:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:20:54.091 [2024-06-10 13:49:08.451487] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:20:54.091 [2024-06-10 13:49:08.451552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376757 ] 00:20:54.091 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.091 [2024-06-10 13:49:08.561326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.350 [2024-06-10 13:49:08.648289] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.917 13:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:54.917 13:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:20:54.917 13:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:55.484 Nvme0n1 00:20:55.484 13:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:55.743 [ 00:20:55.743 { 00:20:55.743 "name": "Nvme0n1", 00:20:55.743 "aliases": [ 00:20:55.743 "b34e49b0-310e-43f4-8fae-c21a5e07a67a" 00:20:55.743 ], 00:20:55.743 "product_name": "NVMe disk", 00:20:55.743 "block_size": 4096, 00:20:55.743 "num_blocks": 38912, 00:20:55.743 "uuid": "b34e49b0-310e-43f4-8fae-c21a5e07a67a", 00:20:55.743 "assigned_rate_limits": { 00:20:55.743 "rw_ios_per_sec": 0, 00:20:55.743 "rw_mbytes_per_sec": 0, 00:20:55.743 "r_mbytes_per_sec": 0, 00:20:55.743 "w_mbytes_per_sec": 0 00:20:55.743 }, 00:20:55.743 "claimed": false, 00:20:55.743 "zoned": false, 00:20:55.743 "supported_io_types": { 00:20:55.743 "read": true, 00:20:55.743 "write": true, 00:20:55.743 "unmap": true, 00:20:55.743 "write_zeroes": true, 00:20:55.743 "flush": true, 00:20:55.743 "reset": true, 00:20:55.743 "compare": true, 00:20:55.743 "compare_and_write": true, 00:20:55.743 "abort": true, 00:20:55.743 "nvme_admin": true, 00:20:55.743 "nvme_io": true 00:20:55.743 }, 00:20:55.743 "memory_domains": [ 00:20:55.743 { 00:20:55.743 "dma_device_id": "system", 00:20:55.743 "dma_device_type": 1 00:20:55.743 } 00:20:55.743 ], 00:20:55.743 "driver_specific": { 00:20:55.743 "nvme": [ 00:20:55.743 { 00:20:55.743 "trid": { 00:20:55.743 "trtype": "TCP", 00:20:55.743 "adrfam": "IPv4", 00:20:55.743 "traddr": "10.0.0.2", 00:20:55.743 "trsvcid": "4420", 00:20:55.743 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:55.743 }, 00:20:55.743 "ctrlr_data": { 00:20:55.743 "cntlid": 1, 00:20:55.743 "vendor_id": "0x8086", 00:20:55.743 "model_number": "SPDK bdev Controller", 00:20:55.743 "serial_number": "SPDK0", 00:20:55.743 "firmware_revision": "24.09", 00:20:55.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:55.744 "oacs": { 00:20:55.744 "security": 0, 00:20:55.744 "format": 0, 00:20:55.744 "firmware": 0, 00:20:55.744 "ns_manage": 0 00:20:55.744 }, 00:20:55.744 "multi_ctrlr": true, 00:20:55.744 "ana_reporting": false 00:20:55.744 }, 00:20:55.744 "vs": { 00:20:55.744 "nvme_version": "1.3" 00:20:55.744 }, 00:20:55.744 "ns_data": { 00:20:55.744 "id": 1, 00:20:55.744 "can_share": true 00:20:55.744 } 00:20:55.744 } 00:20:55.744 ], 00:20:55.744 "mp_policy": "active_passive" 00:20:55.744 } 00:20:55.744 } 00:20:55.744 ] 00:20:55.744 13:49:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1377025 00:20:55.744 13:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:20:55.744 13:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:55.744 Running I/O for 10 seconds... 00:20:56.690 Latency(us) 00:20:56.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:56.690 Nvme0n1 : 1.00 16278.00 63.59 0.00 0.00 0.00 0.00 0.00 00:20:56.690 =================================================================================================================== 00:20:56.690 Total : 16278.00 63.59 0.00 0.00 0.00 0.00 0.00 00:20:56.690 00:20:57.627 13:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 33f0794b-85b6-43eb-8b84-69c8bc3300cf 00:20:57.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:57.886 Nvme0n1 : 2.00 16407.00 64.09 0.00 0.00 0.00 0.00 0.00 00:20:57.886 =================================================================================================================== 00:20:57.886 Total : 16407.00 64.09 0.00 0.00 0.00 0.00 0.00 00:20:57.886 00:20:57.886 true 00:20:57.886 13:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33f0794b-85b6-43eb-8b84-69c8bc3300cf 00:20:57.886 13:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:58.144 13:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:58.144 13:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:58.144 13:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1377025 00:20:58.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:58.712 Nvme0n1 : 3.00 16455.33 64.28 0.00 0.00 0.00 0.00 0.00 00:20:58.712 =================================================================================================================== 00:20:58.712 Total : 16455.33 64.28 0.00 0.00 0.00 0.00 0.00 00:20:58.712 00:21:00.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:00.091 Nvme0n1 : 4.00 16497.50 64.44 0.00 0.00 0.00 0.00 0.00 00:21:00.091 =================================================================================================================== 00:21:00.091 Total : 16497.50 64.44 0.00 0.00 0.00 0.00 0.00 00:21:00.091 00:21:01.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:01.029 Nvme0n1 : 5.00 16534.00 64.59 0.00 0.00 0.00 0.00 0.00 00:21:01.029 =================================================================================================================== 00:21:01.029 Total : 16534.00 64.59 0.00 0.00 0.00 0.00 0.00 00:21:01.029 00:21:01.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:01.968 Nvme0n1 : 6.00 16559.67 64.69 0.00 0.00 0.00 0.00 0.00 00:21:01.968 
=================================================================================================================== 00:21:01.968 Total : 16559.67 64.69 0.00 0.00 0.00 0.00 0.00 00:21:01.968 00:21:02.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:02.906 Nvme0n1 : 7.00 16582.57 64.78 0.00 0.00 0.00 0.00 0.00 00:21:02.906 =================================================================================================================== 00:21:02.906 Total : 16582.57 64.78 0.00 0.00 0.00 0.00 0.00 00:21:02.906 00:21:03.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:03.845 Nvme0n1 : 8.00 16602.75 64.85 0.00 0.00 0.00 0.00 0.00 00:21:03.845 =================================================================================================================== 00:21:03.845 Total : 16602.75 64.85 0.00 0.00 0.00 0.00 0.00 00:21:03.845 00:21:04.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:04.781 Nvme0n1 : 9.00 16617.56 64.91 0.00 0.00 0.00 0.00 0.00 00:21:04.781 =================================================================================================================== 00:21:04.781 Total : 16617.56 64.91 0.00 0.00 0.00 0.00 0.00 00:21:04.781 00:21:05.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:05.720 Nvme0n1 : 10.00 16622.20 64.93 0.00 0.00 0.00 0.00 0.00 00:21:05.720 =================================================================================================================== 00:21:05.720 Total : 16622.20 64.93 0.00 0.00 0.00 0.00 0.00 00:21:05.720 00:21:05.720 00:21:05.720 Latency(us) 00:21:05.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:05.720 Nvme0n1 : 10.01 16621.69 64.93 0.00 0.00 7694.34 5924.45 15833.50 00:21:05.720 =================================================================================================================== 00:21:05.720 Total : 16621.69 64.93 0.00 0.00 7694.34 5924.45 15833.50 00:21:05.720 0 00:21:05.720 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1376757 00:21:05.720 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 1376757 ']' 00:21:05.720 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 1376757 00:21:05.720 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:21:05.720 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:05.720 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1376757 00:21:05.979 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:05.979 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:05.979 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1376757' 00:21:05.979 killing process with pid 1376757 00:21:05.979 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 1376757 00:21:05.979 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.979 00:21:05.979 Latency(us) 00:21:05.979 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:21:05.979 =================================================================================================================== 00:21:05.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.979 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 1376757 00:21:05.979 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:06.238 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:06.497 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:21:06.497 13:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33f0794b-85b6-43eb-8b84-69c8bc3300cf 00:21:06.757 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:21:06.757 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:21:06.757 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:07.016 [2024-06-10 13:49:21.348200] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:07.016 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33f0794b-85b6-43eb-8b84-69c8bc3300cf 00:21:07.016 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:21:07.016 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33f0794b-85b6-43eb-8b84-69c8bc3300cf 00:21:07.016 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:07.016 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:07.016 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:07.016 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:07.016 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:07.016 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:07.016 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:07.016 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:21:07.016 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33f0794b-85b6-43eb-8b84-69c8bc3300cf 00:21:07.276 request: 00:21:07.276 { 00:21:07.276 "uuid": "33f0794b-85b6-43eb-8b84-69c8bc3300cf", 00:21:07.276 "method": "bdev_lvol_get_lvstores", 00:21:07.276 "req_id": 1 00:21:07.276 } 00:21:07.276 Got JSON-RPC error response 00:21:07.276 response: 00:21:07.276 { 00:21:07.276 "code": -19, 00:21:07.276 "message": "No such device" 00:21:07.276 } 00:21:07.276 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:21:07.276 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:07.276 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:07.276 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:07.276 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:07.535 aio_bdev 00:21:07.535 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b34e49b0-310e-43f4-8fae-c21a5e07a67a 00:21:07.535 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=b34e49b0-310e-43f4-8fae-c21a5e07a67a 00:21:07.535 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:07.535 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:21:07.535 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:07.535 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:07.535 13:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:07.795 13:49:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b34e49b0-310e-43f4-8fae-c21a5e07a67a -t 2000 00:21:08.054 [ 00:21:08.054 { 00:21:08.054 "name": "b34e49b0-310e-43f4-8fae-c21a5e07a67a", 00:21:08.054 "aliases": [ 00:21:08.054 "lvs/lvol" 00:21:08.054 ], 00:21:08.054 "product_name": "Logical Volume", 00:21:08.054 "block_size": 4096, 00:21:08.054 "num_blocks": 38912, 00:21:08.054 "uuid": "b34e49b0-310e-43f4-8fae-c21a5e07a67a", 00:21:08.054 "assigned_rate_limits": { 00:21:08.054 "rw_ios_per_sec": 0, 00:21:08.054 "rw_mbytes_per_sec": 0, 00:21:08.054 "r_mbytes_per_sec": 0, 00:21:08.054 "w_mbytes_per_sec": 0 00:21:08.054 }, 00:21:08.054 "claimed": false, 00:21:08.054 "zoned": false, 00:21:08.054 "supported_io_types": { 00:21:08.054 "read": true, 00:21:08.054 "write": true, 00:21:08.054 "unmap": true, 00:21:08.054 "write_zeroes": true, 00:21:08.054 "flush": false, 00:21:08.054 "reset": true, 00:21:08.054 "compare": false, 00:21:08.054 "compare_and_write": false, 00:21:08.054 "abort": false, 00:21:08.054 "nvme_admin": false, 00:21:08.054 "nvme_io": false 00:21:08.054 }, 00:21:08.054 "driver_specific": { 00:21:08.054 "lvol": { 00:21:08.054 "lvol_store_uuid": "33f0794b-85b6-43eb-8b84-69c8bc3300cf", 00:21:08.054 "base_bdev": "aio_bdev", 
00:21:08.054 "thin_provision": false, 00:21:08.054 "num_allocated_clusters": 38, 00:21:08.054 "snapshot": false, 00:21:08.054 "clone": false, 00:21:08.054 "esnap_clone": false 00:21:08.054 } 00:21:08.054 } 00:21:08.054 } 00:21:08.054 ] 00:21:08.054 13:49:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:21:08.054 13:49:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:21:08.054 13:49:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33f0794b-85b6-43eb-8b84-69c8bc3300cf 00:21:08.314 13:49:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:21:08.314 13:49:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33f0794b-85b6-43eb-8b84-69c8bc3300cf 00:21:08.314 13:49:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:21:08.573 13:49:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:21:08.573 13:49:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b34e49b0-310e-43f4-8fae-c21a5e07a67a 00:21:08.573 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 33f0794b-85b6-43eb-8b84-69c8bc3300cf 00:21:08.832 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:09.090 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:09.090 00:21:09.090 real 0m17.445s 00:21:09.090 user 0m16.520s 00:21:09.090 sys 0m2.385s 00:21:09.090 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:09.090 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:21:09.090 ************************************ 00:21:09.090 END TEST lvs_grow_clean 00:21:09.090 ************************************ 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:09.349 ************************************ 00:21:09.349 START TEST lvs_grow_dirty 00:21:09.349 ************************************ 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:09.349 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:09.350 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:09.609 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:21:09.609 13:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:09.868 13:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:09.868 13:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:09.868 13:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:09.868 13:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:09.868 13:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:10.126 13:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 lvol 150 00:21:10.126 13:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b 00:21:10.126 13:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:10.126 13:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:10.385 [2024-06-10 13:49:24.783992] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:10.385 [2024-06-10 13:49:24.784057] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:10.385 true 00:21:10.385 13:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:10.385 13:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:21:10.645 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:21:10.645 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:10.904 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b 00:21:11.163 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:11.422 [2024-06-10 13:49:25.690759] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.422 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:11.681 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1379744 00:21:11.682 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.682 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:11.682 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1379744 /var/tmp/bdevperf.sock 00:21:11.682 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 1379744 ']' 00:21:11.682 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.682 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:11.682 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.682 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:11.682 13:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 [2024-06-10 13:49:25.984401] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:21:11.682 [2024-06-10 13:49:25.984466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379744 ] 00:21:11.682 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.682 [2024-06-10 13:49:26.093703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.955 [2024-06-10 13:49:26.176521] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.538 13:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:12.538 13:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:21:12.538 13:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:12.797 Nvme0n1 00:21:12.797 13:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:13.056 [ 00:21:13.056 { 00:21:13.056 "name": "Nvme0n1", 00:21:13.056 "aliases": [ 00:21:13.056 "d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b" 00:21:13.056 ], 00:21:13.056 "product_name": "NVMe disk", 00:21:13.056 "block_size": 4096, 00:21:13.056 "num_blocks": 38912, 00:21:13.056 "uuid": "d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b", 00:21:13.056 "assigned_rate_limits": { 00:21:13.056 "rw_ios_per_sec": 0, 00:21:13.056 "rw_mbytes_per_sec": 0, 00:21:13.056 "r_mbytes_per_sec": 0, 00:21:13.056 "w_mbytes_per_sec": 0 00:21:13.056 }, 00:21:13.056 "claimed": false, 00:21:13.056 "zoned": false, 00:21:13.056 "supported_io_types": { 00:21:13.056 "read": true, 00:21:13.056 "write": true, 00:21:13.056 "unmap": true, 00:21:13.056 "write_zeroes": true, 00:21:13.056 "flush": true, 00:21:13.056 "reset": true, 00:21:13.056 "compare": true, 00:21:13.056 "compare_and_write": true, 00:21:13.056 "abort": true, 00:21:13.056 "nvme_admin": true, 00:21:13.056 "nvme_io": true 00:21:13.056 }, 00:21:13.056 "memory_domains": [ 00:21:13.056 { 00:21:13.056 "dma_device_id": "system", 00:21:13.056 "dma_device_type": 1 00:21:13.056 } 00:21:13.056 ], 00:21:13.056 "driver_specific": { 00:21:13.056 "nvme": [ 00:21:13.056 { 00:21:13.056 "trid": { 00:21:13.056 "trtype": "TCP", 00:21:13.056 "adrfam": "IPv4", 00:21:13.056 "traddr": "10.0.0.2", 00:21:13.056 "trsvcid": "4420", 00:21:13.056 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:13.056 }, 00:21:13.056 "ctrlr_data": { 00:21:13.056 "cntlid": 1, 00:21:13.056 "vendor_id": "0x8086", 00:21:13.056 "model_number": "SPDK bdev Controller", 00:21:13.056 "serial_number": "SPDK0", 00:21:13.056 "firmware_revision": "24.09", 00:21:13.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:13.056 "oacs": { 00:21:13.056 "security": 0, 00:21:13.056 "format": 0, 00:21:13.056 "firmware": 0, 00:21:13.056 "ns_manage": 0 00:21:13.056 }, 00:21:13.056 "multi_ctrlr": true, 00:21:13.056 "ana_reporting": false 00:21:13.056 }, 00:21:13.056 "vs": { 00:21:13.056 "nvme_version": "1.3" 00:21:13.056 }, 00:21:13.056 "ns_data": { 00:21:13.056 "id": 1, 00:21:13.056 "can_share": true 00:21:13.056 } 00:21:13.056 } 00:21:13.056 ], 00:21:13.056 "mp_policy": "active_passive" 00:21:13.056 } 00:21:13.056 } 00:21:13.056 ] 00:21:13.056 13:49:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1380015 00:21:13.056 13:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:13.056 13:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:13.056 Running I/O for 10 seconds... 00:21:14.434 Latency(us) 00:21:14.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:14.435 Nvme0n1 : 1.00 16923.00 66.11 0.00 0.00 0.00 0.00 0.00 00:21:14.435 =================================================================================================================== 00:21:14.435 Total : 16923.00 66.11 0.00 0.00 0.00 0.00 0.00 00:21:14.435 00:21:15.010 13:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:15.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:15.268 Nvme0n1 : 2.00 17070.00 66.68 0.00 0.00 0.00 0.00 0.00 00:21:15.268 =================================================================================================================== 00:21:15.268 Total : 17070.00 66.68 0.00 0.00 0.00 0.00 0.00 00:21:15.268 00:21:15.268 true 00:21:15.268 13:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:15.268 13:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:15.526 13:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:15.526 13:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:15.526 13:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1380015 00:21:16.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:16.093 Nvme0n1 : 3.00 17096.33 66.78 0.00 0.00 0.00 0.00 0.00 00:21:16.093 =================================================================================================================== 00:21:16.093 Total : 17096.33 66.78 0.00 0.00 0.00 0.00 0.00 00:21:16.093 00:21:17.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:17.469 Nvme0n1 : 4.00 17158.25 67.02 0.00 0.00 0.00 0.00 0.00 00:21:17.469 =================================================================================================================== 00:21:17.469 Total : 17158.25 67.02 0.00 0.00 0.00 0.00 0.00 00:21:17.469 00:21:18.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:18.405 Nvme0n1 : 5.00 17195.60 67.17 0.00 0.00 0.00 0.00 0.00 00:21:18.405 =================================================================================================================== 00:21:18.405 Total : 17195.60 67.17 0.00 0.00 0.00 0.00 0.00 00:21:18.405 00:21:19.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:19.341 Nvme0n1 : 6.00 17223.17 67.28 0.00 0.00 0.00 0.00 0.00 00:21:19.341 
=================================================================================================================== 00:21:19.341 Total : 17223.17 67.28 0.00 0.00 0.00 0.00 0.00 00:21:19.341 00:21:20.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:20.277 Nvme0n1 : 7.00 17249.57 67.38 0.00 0.00 0.00 0.00 0.00 00:21:20.277 =================================================================================================================== 00:21:20.277 Total : 17249.57 67.38 0.00 0.00 0.00 0.00 0.00 00:21:20.277 00:21:21.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:21.213 Nvme0n1 : 8.00 17275.12 67.48 0.00 0.00 0.00 0.00 0.00 00:21:21.213 =================================================================================================================== 00:21:21.213 Total : 17275.12 67.48 0.00 0.00 0.00 0.00 0.00 00:21:21.213 00:21:22.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:22.148 Nvme0n1 : 9.00 17290.00 67.54 0.00 0.00 0.00 0.00 0.00 00:21:22.148 =================================================================================================================== 00:21:22.148 Total : 17290.00 67.54 0.00 0.00 0.00 0.00 0.00 00:21:22.148 00:21:23.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:23.084 Nvme0n1 : 10.00 17301.70 67.58 0.00 0.00 0.00 0.00 0.00 00:21:23.084 =================================================================================================================== 00:21:23.084 Total : 17301.70 67.58 0.00 0.00 0.00 0.00 0.00 00:21:23.084 00:21:23.343 00:21:23.343 Latency(us) 00:21:23.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:23.343 Nvme0n1 : 10.01 17304.95 67.60 0.00 0.00 7391.77 4587.52 17720.93 00:21:23.343 =================================================================================================================== 00:21:23.343 Total : 17304.95 67.60 0.00 0.00 7391.77 4587.52 17720.93 00:21:23.343 0 00:21:23.343 13:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1379744 00:21:23.343 13:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 1379744 ']' 00:21:23.343 13:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 1379744 00:21:23.343 13:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:21:23.343 13:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:23.343 13:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1379744 00:21:23.343 13:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:23.343 13:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:23.343 13:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1379744' 00:21:23.343 killing process with pid 1379744 00:21:23.343 13:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 1379744 00:21:23.343 Received shutdown signal, test time was about 10.000000 seconds 00:21:23.343 00:21:23.343 Latency(us) 00:21:23.343 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:21:23.343 =================================================================================================================== 00:21:23.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:23.343 13:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 1379744 00:21:23.601 13:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:23.601 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:23.858 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:23.858 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1376173 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1376173 00:21:24.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1376173 Killed "${NVMF_APP[@]}" "$@" 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1381886 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1381886 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 1381886 ']' 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
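At this point the old nvmf target (pid 1376173) has been killed with SIGKILL, so the lvstore metadata on aio_bdev is deliberately left dirty while a fresh target (pid 1381886) starts up. The recovery check that follows boils down to roughly the RPC sequence below; this is a condensed sketch, not nvmf_lvs_grow.sh itself, rpc.py stands for the full scripts/rpc.py invocation used in the trace, and the UUIDs are the ones from this run.

    rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # re-register the backing file; blobstore replays the dirty metadata
    rpc.py bdev_wait_for_examine                                                   # let lvol examine/recovery finish
    rpc.py bdev_get_bdevs -b d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b -t 2000          # the lvol must reappear after recovery
    free=$(rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 | jq -r '.[0].free_clusters')
    total=$(rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 ))                                                # the grown lvstore size must survive the unclean shutdown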
00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:24.117 13:49:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:24.376 [2024-06-10 13:49:38.630889] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:21:24.376 [2024-06-10 13:49:38.630954] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.376 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.376 [2024-06-10 13:49:38.761326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.376 [2024-06-10 13:49:38.844902] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.376 [2024-06-10 13:49:38.844947] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.376 [2024-06-10 13:49:38.844961] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.376 [2024-06-10 13:49:38.844972] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.376 [2024-06-10 13:49:38.844982] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.376 [2024-06-10 13:49:38.845009] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.310 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:25.310 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:21:25.310 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:25.310 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:25.310 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:25.310 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.310 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:25.310 [2024-06-10 13:49:39.776482] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:25.310 [2024-06-10 13:49:39.776590] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:25.310 [2024-06-10 13:49:39.776628] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:25.569 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:21:25.569 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b 00:21:25.569 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b 00:21:25.569 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:25.569 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:21:25.569 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:25.569 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:25.569 13:49:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:25.569 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b -t 2000 00:21:25.828 [ 00:21:25.828 { 00:21:25.828 "name": "d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b", 00:21:25.828 "aliases": [ 00:21:25.828 "lvs/lvol" 00:21:25.828 ], 00:21:25.828 "product_name": "Logical Volume", 00:21:25.828 "block_size": 4096, 00:21:25.828 "num_blocks": 38912, 00:21:25.828 "uuid": "d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b", 00:21:25.828 "assigned_rate_limits": { 00:21:25.828 "rw_ios_per_sec": 0, 00:21:25.828 "rw_mbytes_per_sec": 0, 00:21:25.828 "r_mbytes_per_sec": 0, 00:21:25.828 "w_mbytes_per_sec": 0 00:21:25.828 }, 00:21:25.828 "claimed": false, 00:21:25.828 "zoned": false, 00:21:25.828 "supported_io_types": { 00:21:25.828 "read": true, 00:21:25.828 "write": true, 00:21:25.828 "unmap": true, 00:21:25.828 "write_zeroes": true, 00:21:25.828 "flush": false, 00:21:25.828 "reset": true, 00:21:25.828 "compare": false, 00:21:25.828 "compare_and_write": false, 00:21:25.828 "abort": false, 00:21:25.828 "nvme_admin": false, 00:21:25.828 "nvme_io": false 00:21:25.828 }, 00:21:25.828 "driver_specific": { 00:21:25.828 "lvol": { 00:21:25.828 "lvol_store_uuid": "90d33b14-9bda-48bd-be47-f73eb3b69f27", 00:21:25.828 "base_bdev": "aio_bdev", 00:21:25.828 "thin_provision": false, 00:21:25.828 "num_allocated_clusters": 38, 00:21:25.828 "snapshot": false, 00:21:25.828 "clone": false, 00:21:25.828 "esnap_clone": false 00:21:25.828 } 00:21:25.828 } 00:21:25.828 } 00:21:25.828 ] 00:21:25.828 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:21:25.828 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:25.828 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:21:26.086 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:21:26.086 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:26.086 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:21:26.344 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:21:26.344 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:26.603 [2024-06-10 13:49:40.916946] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:26.603 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:26.603 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:21:26.603 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:26.603 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:26.603 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:26.603 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:26.603 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:26.603 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:26.603 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:26.603 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:26.603 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:21:26.603 13:49:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:26.860 request: 00:21:26.860 { 00:21:26.860 "uuid": "90d33b14-9bda-48bd-be47-f73eb3b69f27", 00:21:26.860 "method": "bdev_lvol_get_lvstores", 00:21:26.860 "req_id": 1 00:21:26.860 } 00:21:26.860 Got JSON-RPC error response 00:21:26.860 response: 00:21:26.860 { 00:21:26.860 "code": -19, 00:21:26.860 "message": "No such device" 00:21:26.860 } 00:21:26.860 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:21:26.860 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:26.860 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:26.860 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:26.860 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:27.118 aio_bdev 00:21:27.118 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b 00:21:27.118 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b 00:21:27.118 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:27.118 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:21:27.118 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
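The "No such device" (-19) response above is the expected outcome: once aio_bdev is deleted, the lvstore backed by it must disappear as well, and the NOT helper from autotest_common.sh inverts the exit status so this step only passes when bdev_lvol_get_lvstores fails. A stand-alone equivalent of that assertion (a sketch, not the helper itself) would be:

    if rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 2>/dev/null; then
        echo "lvstore unexpectedly still visible after aio_bdev removal" >&2
        exit 1
    fi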
00:21:27.118 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:27.118 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:27.377 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b -t 2000 00:21:27.635 [ 00:21:27.635 { 00:21:27.635 "name": "d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b", 00:21:27.635 "aliases": [ 00:21:27.635 "lvs/lvol" 00:21:27.635 ], 00:21:27.635 "product_name": "Logical Volume", 00:21:27.635 "block_size": 4096, 00:21:27.635 "num_blocks": 38912, 00:21:27.635 "uuid": "d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b", 00:21:27.635 "assigned_rate_limits": { 00:21:27.635 "rw_ios_per_sec": 0, 00:21:27.635 "rw_mbytes_per_sec": 0, 00:21:27.635 "r_mbytes_per_sec": 0, 00:21:27.635 "w_mbytes_per_sec": 0 00:21:27.635 }, 00:21:27.635 "claimed": false, 00:21:27.635 "zoned": false, 00:21:27.635 "supported_io_types": { 00:21:27.635 "read": true, 00:21:27.635 "write": true, 00:21:27.635 "unmap": true, 00:21:27.635 "write_zeroes": true, 00:21:27.635 "flush": false, 00:21:27.635 "reset": true, 00:21:27.635 "compare": false, 00:21:27.635 "compare_and_write": false, 00:21:27.635 "abort": false, 00:21:27.635 "nvme_admin": false, 00:21:27.635 "nvme_io": false 00:21:27.635 }, 00:21:27.635 "driver_specific": { 00:21:27.635 "lvol": { 00:21:27.635 "lvol_store_uuid": "90d33b14-9bda-48bd-be47-f73eb3b69f27", 00:21:27.635 "base_bdev": "aio_bdev", 00:21:27.635 "thin_provision": false, 00:21:27.635 "num_allocated_clusters": 38, 00:21:27.635 "snapshot": false, 00:21:27.635 "clone": false, 00:21:27.635 "esnap_clone": false 00:21:27.635 } 00:21:27.635 } 00:21:27.635 } 00:21:27.635 ] 00:21:27.635 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:21:27.635 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:27.635 13:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:21:27.635 13:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:21:27.635 13:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:27.635 13:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:21:27.893 13:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:21:27.893 13:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d2c9e1d4-e6df-49fd-b951-4f33dd44fe1b 00:21:28.151 13:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 90d33b14-9bda-48bd-be47-f73eb3b69f27 00:21:28.409 13:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:28.668 13:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:28.668 00:21:28.668 real 0m19.441s 00:21:28.668 user 0m48.800s 00:21:28.668 sys 0m5.055s 00:21:28.668 13:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:28.668 13:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:28.668 ************************************ 00:21:28.668 END TEST lvs_grow_dirty 00:21:28.668 ************************************ 00:21:28.668 13:49:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:21:28.668 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:21:28.668 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:21:28.668 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:21:28.668 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:28.668 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:21:28.668 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:21:28.668 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:21:28.668 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:28.668 nvmf_trace.0 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.927 rmmod nvme_tcp 00:21:28.927 rmmod nvme_fabrics 00:21:28.927 rmmod nvme_keyring 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1381886 ']' 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1381886 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 1381886 ']' 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 1381886 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1381886 00:21:28.927 13:49:43 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1381886' 00:21:28.927 killing process with pid 1381886 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 1381886 00:21:28.927 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 1381886 00:21:29.187 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:29.187 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:29.187 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:29.187 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.187 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:29.187 13:49:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.187 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.187 13:49:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.092 13:49:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:31.352 00:21:31.352 real 0m49.846s 00:21:31.352 user 1m13.047s 00:21:31.352 sys 0m14.887s 00:21:31.352 13:49:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:31.352 13:49:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:31.352 ************************************ 00:21:31.352 END TEST nvmf_lvs_grow 00:21:31.352 ************************************ 00:21:31.352 13:49:45 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:31.352 13:49:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:31.352 13:49:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:31.352 13:49:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.352 ************************************ 00:21:31.352 START TEST nvmf_bdev_io_wait 00:21:31.352 ************************************ 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:31.352 * Looking for test storage... 
00:21:31.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.352 13:49:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:41.402 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:41.402 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:41.402 Found net devices under 0000:af:00.0: cvl_0_0 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:41.402 Found net devices under 0000:af:00.1: cvl_0_1 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:41.402 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:41.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:41.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:21:41.403 00:21:41.403 --- 10.0.0.2 ping statistics --- 00:21:41.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.403 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:21:41.403 00:21:41.403 --- 10.0.0.1 ping statistics --- 00:21:41.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.403 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1387186 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1387186 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 1387186 ']' 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:41.403 13:49:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:41.403 [2024-06-10 13:49:54.550475] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
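The two ping checks above verify the namespace plumbing that nvmf/common.sh set up for this phy TCP run, using the two E810 ports discovered earlier (cvl_0_0 for the target, cvl_0_1 for the initiator). Condensed, the sequence traced above is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic through
    ping -c 1 10.0.0.2                                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator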
00:21:41.403 [2024-06-10 13:49:54.550549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.403 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.403 [2024-06-10 13:49:54.677886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.403 [2024-06-10 13:49:54.761690] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.403 [2024-06-10 13:49:54.761738] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.403 [2024-06-10 13:49:54.761756] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.403 [2024-06-10 13:49:54.761768] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.403 [2024-06-10 13:49:54.761779] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.403 [2024-06-10 13:49:54.761841] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.403 [2024-06-10 13:49:54.761935] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.403 [2024-06-10 13:49:54.762046] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.403 [2024-06-10 13:49:54.762046] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:41.403 [2024-06-10 13:49:55.579399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.403 13:49:55 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:41.403 Malloc0 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:41.403 [2024-06-10 13:49:55.650427] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1387469 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1387471 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:41.403 { 00:21:41.403 "params": { 00:21:41.403 "name": "Nvme$subsystem", 00:21:41.403 "trtype": "$TEST_TRANSPORT", 00:21:41.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.403 "adrfam": "ipv4", 00:21:41.403 "trsvcid": "$NVMF_PORT", 00:21:41.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.403 "hdgst": ${hdgst:-false}, 00:21:41.403 "ddgst": ${ddgst:-false} 00:21:41.403 }, 00:21:41.403 "method": "bdev_nvme_attach_controller" 00:21:41.403 } 00:21:41.403 EOF 00:21:41.403 )") 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:21:41.403 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1387473 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:41.404 { 00:21:41.404 "params": { 00:21:41.404 "name": "Nvme$subsystem", 00:21:41.404 "trtype": "$TEST_TRANSPORT", 00:21:41.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.404 "adrfam": "ipv4", 00:21:41.404 "trsvcid": "$NVMF_PORT", 00:21:41.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.404 "hdgst": ${hdgst:-false}, 00:21:41.404 "ddgst": ${ddgst:-false} 00:21:41.404 }, 00:21:41.404 "method": "bdev_nvme_attach_controller" 00:21:41.404 } 00:21:41.404 EOF 00:21:41.404 )") 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1387476 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:41.404 { 00:21:41.404 "params": { 00:21:41.404 "name": "Nvme$subsystem", 00:21:41.404 "trtype": "$TEST_TRANSPORT", 00:21:41.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.404 "adrfam": "ipv4", 00:21:41.404 "trsvcid": "$NVMF_PORT", 00:21:41.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.404 "hdgst": ${hdgst:-false}, 00:21:41.404 "ddgst": ${ddgst:-false} 00:21:41.404 }, 00:21:41.404 "method": "bdev_nvme_attach_controller" 00:21:41.404 } 00:21:41.404 EOF 00:21:41.404 )") 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:41.404 13:49:55 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:41.404 { 00:21:41.404 "params": { 00:21:41.404 "name": "Nvme$subsystem", 00:21:41.404 "trtype": "$TEST_TRANSPORT", 00:21:41.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.404 "adrfam": "ipv4", 00:21:41.404 "trsvcid": "$NVMF_PORT", 00:21:41.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.404 "hdgst": ${hdgst:-false}, 00:21:41.404 "ddgst": ${ddgst:-false} 00:21:41.404 }, 00:21:41.404 "method": "bdev_nvme_attach_controller" 00:21:41.404 } 00:21:41.404 EOF 00:21:41.404 )") 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1387469 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:41.404 "params": { 00:21:41.404 "name": "Nvme1", 00:21:41.404 "trtype": "tcp", 00:21:41.404 "traddr": "10.0.0.2", 00:21:41.404 "adrfam": "ipv4", 00:21:41.404 "trsvcid": "4420", 00:21:41.404 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.404 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.404 "hdgst": false, 00:21:41.404 "ddgst": false 00:21:41.404 }, 00:21:41.404 "method": "bdev_nvme_attach_controller" 00:21:41.404 }' 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
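The trace up to this point is the bdev_io_wait target bring-up: a 64 MiB malloc bdev is exported over NVMe/TCP on 10.0.0.2:4420, and four bdevperf instances (write, read, flush, unmap) are launched against it, each handed a generated bdev_nvme_attach_controller config on /dev/fd/63. A minimal sketch of the same sequence, assuming the RPCs are issued with scripts/rpc.py against the already running target (the test wraps the same calls in its rpc_cmd helper):

    # export a 64 MiB malloc bdev (512-byte blocks) over NVMe/TCP
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # one of the four generators: 4 KiB writes at queue depth 128 for 1 second on core mask 0x10,
    # with the attach-controller JSON from gen_nvmf_target_json piped in on /dev/fd/63
    build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256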
00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:41.404 "params": { 00:21:41.404 "name": "Nvme1", 00:21:41.404 "trtype": "tcp", 00:21:41.404 "traddr": "10.0.0.2", 00:21:41.404 "adrfam": "ipv4", 00:21:41.404 "trsvcid": "4420", 00:21:41.404 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.404 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.404 "hdgst": false, 00:21:41.404 "ddgst": false 00:21:41.404 }, 00:21:41.404 "method": "bdev_nvme_attach_controller" 00:21:41.404 }' 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:41.404 "params": { 00:21:41.404 "name": "Nvme1", 00:21:41.404 "trtype": "tcp", 00:21:41.404 "traddr": "10.0.0.2", 00:21:41.404 "adrfam": "ipv4", 00:21:41.404 "trsvcid": "4420", 00:21:41.404 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.404 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.404 "hdgst": false, 00:21:41.404 "ddgst": false 00:21:41.404 }, 00:21:41.404 "method": "bdev_nvme_attach_controller" 00:21:41.404 }' 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:21:41.404 13:49:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:41.404 "params": { 00:21:41.404 "name": "Nvme1", 00:21:41.404 "trtype": "tcp", 00:21:41.404 "traddr": "10.0.0.2", 00:21:41.404 "adrfam": "ipv4", 00:21:41.404 "trsvcid": "4420", 00:21:41.404 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.404 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.404 "hdgst": false, 00:21:41.404 "ddgst": false 00:21:41.404 }, 00:21:41.404 "method": "bdev_nvme_attach_controller" 00:21:41.404 }' 00:21:41.404 [2024-06-10 13:49:55.708128] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:21:41.404 [2024-06-10 13:49:55.708194] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:41.404 [2024-06-10 13:49:55.709970] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:21:41.404 [2024-06-10 13:49:55.710031] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:21:41.404 [2024-06-10 13:49:55.710408] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:21:41.404 [2024-06-10 13:49:55.710465] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:21:41.404 [2024-06-10 13:49:55.710539] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:21:41.404 [2024-06-10 13:49:55.710601] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:21:41.404 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.684 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.684 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.684 [2024-06-10 13:49:55.950717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.684 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.684 [2024-06-10 13:49:56.010607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.684 [2024-06-10 13:49:56.050702] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:21:41.684 [2024-06-10 13:49:56.070713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.684 [2024-06-10 13:49:56.096595] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:21:41.684 [2024-06-10 13:49:56.152852] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:21:41.943 [2024-06-10 13:49:56.171797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.943 Running I/O for 1 seconds... 00:21:41.943 [2024-06-10 13:49:56.275570] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 7 00:21:41.943 Running I/O for 1 seconds... 00:21:42.201 Running I/O for 1 seconds... 00:21:42.201 Running I/O for 1 seconds... 00:21:43.158 00:21:43.158 Latency(us) 00:21:43.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.158 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:21:43.159 Nvme1n1 : 1.01 9788.96 38.24 0.00 0.00 13015.31 8126.46 18350.08 00:21:43.159 =================================================================================================================== 00:21:43.159 Total : 9788.96 38.24 0.00 0.00 13015.31 8126.46 18350.08 00:21:43.159 00:21:43.159 Latency(us) 00:21:43.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.159 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:21:43.159 Nvme1n1 : 1.01 8478.53 33.12 0.00 0.00 15034.24 7444.89 25375.54 00:21:43.159 =================================================================================================================== 00:21:43.159 Total : 8478.53 33.12 0.00 0.00 15034.24 7444.89 25375.54 00:21:43.159 00:21:43.159 Latency(us) 00:21:43.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.159 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:21:43.159 Nvme1n1 : 1.00 184648.30 721.28 0.00 0.00 690.35 283.44 835.58 00:21:43.159 =================================================================================================================== 00:21:43.159 Total : 184648.30 721.28 0.00 0.00 690.35 283.44 835.58 00:21:43.159 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1387471 00:21:43.159 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1387473 00:21:43.159 00:21:43.159 Latency(us) 00:21:43.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.159 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:21:43.159 Nvme1n1 : 1.01 9831.39 38.40 0.00 0.00 12975.41 4430.23 23592.96 00:21:43.159 
=================================================================================================================== 00:21:43.159 Total : 9831.39 38.40 0.00 0.00 12975.41 4430.23 23592.96 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1387476 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:43.418 rmmod nvme_tcp 00:21:43.418 rmmod nvme_fabrics 00:21:43.418 rmmod nvme_keyring 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1387186 ']' 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1387186 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 1387186 ']' 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 1387186 00:21:43.418 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:21:43.677 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:43.677 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1387186 00:21:43.677 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:43.677 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:43.677 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1387186' 00:21:43.677 killing process with pid 1387186 00:21:43.677 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 1387186 00:21:43.677 13:49:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 1387186 00:21:43.677 13:49:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.677 13:49:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.677 13:49:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.677 13:49:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
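In the result tables above, the MiB/s column is simply IOPS scaled by the 4 KiB I/O size (MiB/s = IOPS x 4096 / 2^20); for the unmap job, 9831.39 x 4096 / 1048576 is about 38.40 MiB/s, and for the flush job 184648.30 x 4096 / 1048576 is about 721.28 MiB/s, matching the printed values. A one-line check:

    # IOPS * io_size(4096 B) / MiB -> prints 38.40 for the unmap job above
    awk 'BEGIN { printf "%.2f\n", 9831.39 * 4096 / 1048576 }'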
00:21:43.677 13:49:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.677 13:49:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.677 13:49:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.677 13:49:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.212 13:50:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:46.212 00:21:46.212 real 0m14.573s 00:21:46.212 user 0m21.324s 00:21:46.212 sys 0m8.934s 00:21:46.212 13:50:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:46.212 13:50:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:46.212 ************************************ 00:21:46.212 END TEST nvmf_bdev_io_wait 00:21:46.212 ************************************ 00:21:46.213 13:50:00 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:46.213 13:50:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:46.213 13:50:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:46.213 13:50:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.213 ************************************ 00:21:46.213 START TEST nvmf_queue_depth 00:21:46.213 ************************************ 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:46.213 * Looking for test storage... 00:21:46.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 
-- # NVME_CONNECT='nvme connect' 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.213 13:50:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:21:54.329 13:50:08 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.329 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:54.330 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:54.330 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:54.330 Found net devices under 0000:af:00.0: cvl_0_0 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:54.330 Found net devices under 0000:af:00.1: cvl_0_1 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
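nvmf_tcp_init splits the two E810 ports between a target and an initiator: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. The trace that follows performs essentially this sequence (commands as they appear in it; a sketch, not the full helper):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator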
00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.330 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.589 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.589 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.589 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:54.589 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.589 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.589 13:50:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.589 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:54.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:21:54.589 00:21:54.589 --- 10.0.0.2 ping statistics --- 00:21:54.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.589 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:21:54.589 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:21:54.589 00:21:54.589 --- 10.0.0.1 ping statistics --- 00:21:54.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.589 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:21:54.589 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.589 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:21:54.589 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:54.589 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.589 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:54.589 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:54.589 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.589 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:54.589 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1392213 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1392213 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 1392213 ']' 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:54.848 13:50:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:54.848 [2024-06-10 13:50:09.125323] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:21:54.848 [2024-06-10 13:50:09.125387] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.848 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.848 [2024-06-10 13:50:09.243555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.107 [2024-06-10 13:50:09.329034] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.107 [2024-06-10 13:50:09.329076] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.107 [2024-06-10 13:50:09.329089] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.107 [2024-06-10 13:50:09.329101] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.107 [2024-06-10 13:50:09.329111] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
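nvmfappstart then launches the target on core mask 0x2 inside that namespace and blocks in waitforlisten until pid 1392213 is serving RPCs. A rough stand-in for that wait, assuming the default /var/tmp/spdk.sock RPC socket (the suite uses its own waitforlisten helper; polling rpc_get_methods is only an approximation):

    # start the target in the target namespace on core mask 0x2
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # wait until the RPC socket answers before configuring the transport
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done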
00:21:55.107 [2024-06-10 13:50:09.329140] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:55.674 [2024-06-10 13:50:10.081682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:55.674 Malloc0 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.674 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:55.674 [2024-06-10 13:50:10.141701] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.933 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.933 13:50:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1392490 00:21:55.933 13:50:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:21:55.933 13:50:10 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.933 13:50:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1392490 /var/tmp/bdevperf.sock 00:21:55.933 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 1392490 ']' 00:21:55.933 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.933 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:55.933 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.933 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:55.933 13:50:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:55.933 [2024-06-10 13:50:10.197634] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:21:55.933 [2024-06-10 13:50:10.197695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392490 ] 00:21:55.933 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.933 [2024-06-10 13:50:10.316128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.933 [2024-06-10 13:50:10.398971] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.868 13:50:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:56.868 13:50:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:21:56.868 13:50:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:56.868 13:50:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.868 13:50:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:56.868 NVMe0n1 00:21:56.868 13:50:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.868 13:50:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:56.868 Running I/O for 10 seconds... 
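queue_depth drives bdevperf differently from the bdev_io_wait run: the transport and subsystem are created as before, but bdevperf is started idle with -z on its own RPC socket, the exported namespace is attached through that socket, and the 10-second verify run at queue depth 1024 is then kicked off with bdevperf.py, which is what produces the "Running I/O for 10 seconds..." line above and the results that follow. A sketch of those three steps (paths as in the trace):

    # start bdevperf idle (-z), 4 KiB verify I/O at queue depth 1024 for 10 s
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # attach the NVMe/TCP namespace as NVMe0n1 via bdevperf's RPC socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # trigger the configured run and wait for the result table
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests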
00:22:09.075 00:22:09.075 Latency(us) 00:22:09.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.075 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:22:09.075 Verification LBA range: start 0x0 length 0x4000 00:22:09.075 NVMe0n1 : 10.07 9154.50 35.76 0.00 0.00 111415.01 20132.66 78014.05 00:22:09.075 =================================================================================================================== 00:22:09.075 Total : 9154.50 35.76 0.00 0.00 111415.01 20132.66 78014.05 00:22:09.075 0 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1392490 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 1392490 ']' 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 1392490 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1392490 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1392490' 00:22:09.075 killing process with pid 1392490 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 1392490 00:22:09.075 Received shutdown signal, test time was about 10.000000 seconds 00:22:09.075 00:22:09.075 Latency(us) 00:22:09.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.075 =================================================================================================================== 00:22:09.075 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 1392490 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:09.075 rmmod nvme_tcp 00:22:09.075 rmmod nvme_fabrics 00:22:09.075 rmmod nvme_keyring 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1392213 ']' 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1392213 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 
1392213 ']' 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 1392213 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1392213 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1392213' 00:22:09.075 killing process with pid 1392213 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 1392213 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 1392213 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.075 13:50:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.643 13:50:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:09.643 00:22:09.643 real 0m23.721s 00:22:09.643 user 0m25.755s 00:22:09.643 sys 0m8.552s 00:22:09.643 13:50:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:09.643 13:50:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:09.643 ************************************ 00:22:09.643 END TEST nvmf_queue_depth 00:22:09.643 ************************************ 00:22:09.643 13:50:24 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:09.643 13:50:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:09.643 13:50:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:09.643 13:50:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:09.903 ************************************ 00:22:09.903 START TEST nvmf_target_multipath 00:22:09.903 ************************************ 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:09.903 * Looking for test storage... 
00:22:09.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:09.903 13:50:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.904 
13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:22:09.904 13:50:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:19.885 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:19.885 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.885 
13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:19.885 Found net devices under 0000:af:00.0: cvl_0_0 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:19.885 Found net devices under 0000:af:00.1: cvl_0_1 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.885 
13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.885 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:22:19.885 00:22:19.885 --- 10.0.0.2 ping statistics --- 00:22:19.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.885 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:22:19.886 00:22:19.886 --- 10.0.0.1 ping statistics --- 00:22:19.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.886 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:22:19.886 only one NIC for nvmf test 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:19.886 13:50:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:19.886 rmmod nvme_tcp 00:22:19.886 rmmod nvme_fabrics 00:22:19.886 rmmod nvme_keyring 00:22:19.886 13:50:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:19.886 13:50:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:22:19.886 13:50:33 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:22:19.886 13:50:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:19.886 13:50:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:19.886 13:50:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:19.886 13:50:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:19.886 13:50:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:19.886 13:50:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:19.886 13:50:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.886 13:50:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.886 13:50:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:20.823 00:22:20.823 real 0m11.027s 00:22:20.823 user 0m2.349s 00:22:20.823 sys 0m6.750s 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:20.823 13:50:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:20.823 
************************************ 00:22:20.823 END TEST nvmf_target_multipath 00:22:20.823 ************************************ 00:22:20.823 13:50:35 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:20.823 13:50:35 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:20.823 13:50:35 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:20.823 13:50:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:20.823 ************************************ 00:22:20.823 START TEST nvmf_zcopy 00:22:20.823 ************************************ 00:22:20.823 13:50:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:21.083 * Looking for test storage... 00:22:21.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.083 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 
00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:22:21.084 13:50:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.292 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
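The array setup traced above is nvmf/common.sh building its table of supported NICs (Intel E810/X722 and Mellanox ConnectX device IDs) before it scans the PCI bus; the scan itself, already visible in the multipath run earlier, maps each matching PCI function to its kernel netdev through sysfs. A minimal standalone sketch of that lookup, assuming the E810 pair (vendor 0x8086, device 0x159b) that this log reports:

# Sketch: map Intel E810 functions to their net devices via sysfs, mirroring
# what gather_supported_nvmf_pci_devs does in the trace above.
for pci in /sys/bus/pci/devices/*; do
  vendor=$(<"$pci/vendor")
  device=$(<"$pci/device")
  [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
  for netdir in "$pci"/net/*; do
    [[ -e $netdir ]] || continue      # function has no bound network driver
    echo "Found net devices under ${pci##*/}: ${netdir##*/}"
  done
done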
00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:29.293 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:29.293 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:29.293 Found net devices under 0000:af:00.0: cvl_0_0 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:29.293 Found net devices under 0000:af:00.1: cvl_0_1 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.293 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.551 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.551 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.551 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:29.551 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.551 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.551 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.551 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:29.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:22:29.551 00:22:29.551 --- 10.0.0.2 ping statistics --- 00:22:29.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.551 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:22:29.551 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:22:29.551 00:22:29.551 --- 10.0.0.1 ping statistics --- 00:22:29.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.551 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:22:29.551 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.551 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:22:29.551 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:29.552 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.552 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:29.552 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:29.552 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.552 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:29.552 13:50:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1403230 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1403230 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 1403230 ']' 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:29.552 13:50:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:29.810 [2024-06-10 13:50:44.070509] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:22:29.810 [2024-06-10 13:50:44.070567] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.810 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.810 [2024-06-10 13:50:44.190229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.810 [2024-06-10 13:50:44.274377] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.810 [2024-06-10 13:50:44.274421] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
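What the trace above has just finished is nvmf_tcp_init plus nvmfappstart for the zcopy run: one port of the E810 pair (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace to act as the target, the other (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, TCP port 4420 is opened, and nvmf_tgt is then started inside the namespace with core mask 0x2. A condensed sketch of that sequence follows; the binary path and the socket-polling loop are assumptions here (the trace uses the Jenkins workspace path and the waitforlisten helper):

# Single-host target/initiator split, as traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
ping -c 1 10.0.0.2                                              # target reachable from the initiator side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # and the reverse direction
# Start the target application inside the namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # path assumed
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done           # rough stand-in for waitforlisten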
00:22:29.810 [2024-06-10 13:50:44.274434] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.810 [2024-06-10 13:50:44.274447] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.810 [2024-06-10 13:50:44.274457] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.810 [2024-06-10 13:50:44.274483] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.744 13:50:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:30.744 13:50:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:22:30.744 13:50:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.744 13:50:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:30.744 13:50:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:30.744 [2024-06-10 13:50:45.018182] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:30.744 [2024-06-10 13:50:45.038378] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:30.744 malloc0 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.744 
13:50:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.744 { 00:22:30.744 "params": { 00:22:30.744 "name": "Nvme$subsystem", 00:22:30.744 "trtype": "$TEST_TRANSPORT", 00:22:30.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.744 "adrfam": "ipv4", 00:22:30.744 "trsvcid": "$NVMF_PORT", 00:22:30.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.744 "hdgst": ${hdgst:-false}, 00:22:30.744 "ddgst": ${ddgst:-false} 00:22:30.744 }, 00:22:30.744 "method": "bdev_nvme_attach_controller" 00:22:30.744 } 00:22:30.744 EOF 00:22:30.744 )") 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:22:30.744 13:50:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:30.744 "params": { 00:22:30.744 "name": "Nvme1", 00:22:30.744 "trtype": "tcp", 00:22:30.744 "traddr": "10.0.0.2", 00:22:30.744 "adrfam": "ipv4", 00:22:30.744 "trsvcid": "4420", 00:22:30.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.744 "hdgst": false, 00:22:30.744 "ddgst": false 00:22:30.744 }, 00:22:30.744 "method": "bdev_nvme_attach_controller" 00:22:30.744 }' 00:22:30.744 [2024-06-10 13:50:45.121962] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:22:30.744 [2024-06-10 13:50:45.122025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403507 ] 00:22:30.744 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.002 [2024-06-10 13:50:45.244365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.002 [2024-06-10 13:50:45.325803] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.260 Running I/O for 10 seconds... 
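The RPC calls traced above (target/zcopy.sh@22 through @30) assemble the target that bdevperf is now exercising: a TCP transport created with --zcopy, one subsystem, listeners on 10.0.0.2:4420, and a 32 MB malloc bdev exported as namespace 1. Written as direct rpc.py invocations the sequence looks roughly like this; rpc_cmd in the test wraps the same script and points it at the nvmf_tgt running inside the namespace, and the relative paths here are assumptions:

rpc=./scripts/rpc.py                                            # the log uses the workspace copy of rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy               # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                      # 32 MB bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# bdevperf then attaches as the initiator using the JSON printed by gen_nvmf_target_json above:
#   ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192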
00:22:41.233 00:22:41.233 Latency(us) 00:22:41.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.233 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:22:41.233 Verification LBA range: start 0x0 length 0x1000 00:22:41.233 Nvme1n1 : 10.02 6400.48 50.00 0.00 0.00 19937.68 3381.66 35651.58 00:22:41.233 =================================================================================================================== 00:22:41.233 Total : 6400.48 50.00 0.00 0.00 19937.68 3381.66 35651.58 00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1405337 00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.494 { 00:22:41.494 "params": { 00:22:41.494 "name": "Nvme$subsystem", 00:22:41.494 "trtype": "$TEST_TRANSPORT", 00:22:41.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.494 "adrfam": "ipv4", 00:22:41.494 "trsvcid": "$NVMF_PORT", 00:22:41.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.494 "hdgst": ${hdgst:-false}, 00:22:41.494 "ddgst": ${ddgst:-false} 00:22:41.494 }, 00:22:41.494 "method": "bdev_nvme_attach_controller" 00:22:41.494 } 00:22:41.494 EOF 00:22:41.494 )") 00:22:41.494 [2024-06-10 13:50:55.733656] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.733697] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
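As a quick sanity check on the verify result just reported: 6400.48 IOPS of 8 KiB I/O is the 50.00 MiB/s shown, and by Little's law the implied number of in-flight commands is IOPS times average latency, which matches the -q 128 queue depth bdevperf was given:

awk 'BEGIN { printf "in-flight ~= %.1f\n", 6400.48 * 19937.68 / 1e6 }'   # ~127.6, vs. -q 128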
00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:22:41.494 [2024-06-10 13:50:55.745644] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.745663] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 13:50:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:41.494 "params": { 00:22:41.494 "name": "Nvme1", 00:22:41.494 "trtype": "tcp", 00:22:41.494 "traddr": "10.0.0.2", 00:22:41.494 "adrfam": "ipv4", 00:22:41.494 "trsvcid": "4420", 00:22:41.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:41.494 "hdgst": false, 00:22:41.494 "ddgst": false 00:22:41.494 }, 00:22:41.494 "method": "bdev_nvme_attach_controller" 00:22:41.494 }' 00:22:41.494 [2024-06-10 13:50:55.757670] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.757687] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.769701] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.769717] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.778973] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:22:41.494 [2024-06-10 13:50:55.779034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405337 ] 00:22:41.494 [2024-06-10 13:50:55.781735] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.781750] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.793766] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.793781] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.805798] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.805813] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.817832] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.817848] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.829865] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.829880] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.494 [2024-06-10 13:50:55.841898] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.841913] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.853931] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.853946] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.865962] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.865977] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.877998] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.878013] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.890031] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.890046] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.898584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.494 [2024-06-10 13:50:55.902065] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.902080] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.914100] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.914117] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.926130] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.926145] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.938164] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.938179] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.950203] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.950229] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.494 [2024-06-10 13:50:55.962234] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.494 [2024-06-10 13:50:55.962249] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:55.974268] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:55.974282] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:55.982078] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.755 [2024-06-10 13:50:55.986300] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:55.986316] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:55.998347] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:55.998370] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.010377] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.010402] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.022407] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.022424] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:22:41.755 [2024-06-10 13:50:56.034437] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.034453] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.046475] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.046492] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.058506] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.058521] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.070537] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.070553] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.082611] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.082640] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.094622] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.094642] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.106847] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.106871] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.118674] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.118689] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.130708] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.130723] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.142743] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.142762] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.154776] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.154795] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.166809] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.166828] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.178854] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.178877] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 Running I/O for 5 seconds... 
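The repeated pairs of errors above come from nvmf_subsystem_add_ns being issued again for an NSID that is already attached: malloc0 was added as namespace 1 during setup, so each new attempt pauses the subsystem, fails in spdk_nvmf_subsystem_add_ns_ext with "Requested NSID 1 already in use", and is reported by nvmf_rpc_ns_paused as "Unable to add namespace". The test appears to drive these calls on purpose while the 5-second randrw job runs, exercising subsystem pause/resume under active zero-copy I/O; the loop itself is not shown in this trace. One way to see why NSID 1 is rejected is to list what the subsystem already exposes (nvmf_get_subsystems is a standard SPDK RPC; the jq filter is only illustrative):

./scripts/rpc.py nvmf_get_subsystems \
  | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'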
00:22:41.755 [2024-06-10 13:50:56.190876] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.190892] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.207083] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.207110] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.755 [2024-06-10 13:50:56.222974] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.755 [2024-06-10 13:50:56.222999] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.239593] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.239618] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.256118] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.256148] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.272556] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.272588] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.289188] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.289214] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.305616] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.305641] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.321777] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.321801] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.339545] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.339570] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.355596] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.355621] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.372015] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.372039] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.389454] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.389480] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.406911] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.406936] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.422227] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 
[2024-06-10 13:50:56.422253] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.439738] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.439763] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.455570] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.455601] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.015 [2024-06-10 13:50:56.473358] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.015 [2024-06-10 13:50:56.473383] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.274 [2024-06-10 13:50:56.488955] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.488980] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.506873] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.506898] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.521145] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.521170] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.537841] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.537867] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.554011] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.554035] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.572041] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.572070] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.587429] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.587454] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.596917] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.596941] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.611505] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.611531] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.628155] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.628180] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.644329] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.644354] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.661779] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.661803] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.677984] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.678008] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.695391] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.695416] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.711517] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.711542] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.275 [2024-06-10 13:50:56.727780] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.275 [2024-06-10 13:50:56.727805] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.746751] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.534 [2024-06-10 13:50:56.746778] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.760847] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.534 [2024-06-10 13:50:56.760872] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.777524] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.534 [2024-06-10 13:50:56.777549] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.793782] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.534 [2024-06-10 13:50:56.793807] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.810850] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.534 [2024-06-10 13:50:56.810877] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.829010] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.534 [2024-06-10 13:50:56.829036] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.843743] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.534 [2024-06-10 13:50:56.843768] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.859797] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.534 [2024-06-10 13:50:56.859822] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.877377] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.534 [2024-06-10 13:50:56.877407] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.892303] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.534 [2024-06-10 13:50:56.892328] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.908426] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.534 [2024-06-10 13:50:56.908451] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.927574] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.534 [2024-06-10 13:50:56.927607] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.534 [2024-06-10 13:50:56.941966] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.535 [2024-06-10 13:50:56.941991] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.535 [2024-06-10 13:50:56.959362] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.535 [2024-06-10 13:50:56.959388] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.535 [2024-06-10 13:50:56.975215] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.535 [2024-06-10 13:50:56.975240] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.535 [2024-06-10 13:50:56.991951] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.535 [2024-06-10 13:50:56.991976] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.008306] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.008331] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.026400] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.026425] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.041718] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.041743] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.059523] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.059548] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.074788] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.074813] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.091915] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.091940] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.106041] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.106067] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.122779] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.122804] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.138023] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.138047] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.155733] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.155759] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.171286] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.171311] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.182899] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.182924] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.199506] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.199531] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.215634] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.215659] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.232843] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.232869] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.794 [2024-06-10 13:50:57.249694] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.794 [2024-06-10 13:50:57.249718] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.053 [2024-06-10 13:50:57.268224] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.053 [2024-06-10 13:50:57.268249] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.053 [2024-06-10 13:50:57.283339] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.053 [2024-06-10 13:50:57.283365] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.053 [2024-06-10 13:50:57.295036] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.053 [2024-06-10 13:50:57.295061] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.053 [2024-06-10 13:50:57.311694] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.053 [2024-06-10 13:50:57.311719] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.053 [2024-06-10 13:50:57.327300] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.053 [2024-06-10 13:50:57.327325] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.053 [2024-06-10 13:50:57.345341] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.053 [2024-06-10 13:50:57.345367] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.053 [2024-06-10 13:50:57.359301] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.053 [2024-06-10 13:50:57.359325] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.054 [2024-06-10 13:50:57.374587] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.054 [2024-06-10 13:50:57.374612] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.054 [2024-06-10 13:50:57.386430] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.054 [2024-06-10 13:50:57.386454] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.054 [2024-06-10 13:50:57.403160] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.054 [2024-06-10 13:50:57.403184] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.054 [2024-06-10 13:50:57.420119] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.054 [2024-06-10 13:50:57.420144] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.054 [2024-06-10 13:50:57.438730] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.054 [2024-06-10 13:50:57.438755] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.054 [2024-06-10 13:50:57.452935] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.054 [2024-06-10 13:50:57.452960] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.054 [2024-06-10 13:50:57.469959] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.054 [2024-06-10 13:50:57.469984] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.054 [2024-06-10 13:50:57.484646] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.054 [2024-06-10 13:50:57.484671] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.054 [2024-06-10 13:50:57.496273] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.054 [2024-06-10 13:50:57.496297] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.054 [2024-06-10 13:50:57.513776] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.054 [2024-06-10 13:50:57.513800] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.529290] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.529314] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.547925] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.547951] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.562163] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.562188] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.579284] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.579308] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.594702] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.594728] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.606021] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.606045] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.623660] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.623685] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.637790] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.637815] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.656028] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.656052] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.670610] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.670634] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.688717] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.688743] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.704326] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.704351] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.714866] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.714890] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.732217] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.732241] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.747924] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.747949] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.759852] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.759878] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.313 [2024-06-10 13:50:57.776379] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.313 [2024-06-10 13:50:57.776405] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.572 [2024-06-10 13:50:57.792809] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.572 [2024-06-10 13:50:57.792834] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.572 [2024-06-10 13:50:57.810779] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.572 [2024-06-10 13:50:57.810805] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.572 [2024-06-10 13:50:57.824843] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.572 [2024-06-10 13:50:57.824868] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.572 [2024-06-10 13:50:57.840886] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.572 [2024-06-10 13:50:57.840917] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:57.857735] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:57.857760] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:57.873340] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:57.873364] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:57.889246] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:57.889271] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:57.907071] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:57.907095] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:57.922994] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:57.923018] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:57.940559] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:57.940591] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:57.955817] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:57.955842] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:57.964968] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:57.964992] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:57.980344] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:57.980369] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:57.991703] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:57.991728] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:58.008818] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:58.008844] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:58.025234] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:58.025259] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.573 [2024-06-10 13:50:58.042914] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.573 [2024-06-10 13:50:58.042940] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.058412] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.058443] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.076057] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.076082] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.089913] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.089938] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.107734] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.107759] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.123140] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.123165] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.142137] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.142162] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.158911] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.158937] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.175477] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.175503] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.191748] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.191773] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.209167] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.209192] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.225375] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.225400] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.241283] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.241308] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.253362] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.253387] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.270418] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.270442] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.285958] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.285983] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.832 [2024-06-10 13:50:58.297299] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.832 [2024-06-10 13:50:58.297323] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.315519] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.315544] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.329678] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.329703] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.346879] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.346904] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.362138] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.362168] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.373782] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.373808] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.391502] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.391528] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.406962] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.406989] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.418546] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.418572] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.434895] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.434922] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.450822] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.450848] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.462269] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.462294] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.478687] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.478713] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.495675] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.495699] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.512613] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.512639] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.528692] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.528716] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.545858] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.545883] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.092 [2024-06-10 13:50:58.561751] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.092 [2024-06-10 13:50:58.561776] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.579080] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.579104] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.595212] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.595237] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.613063] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.613088] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.628272] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.628297] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.639878] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.639903] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.656537] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.656567] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.671885] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.671910] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.688082] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.688108] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.705799] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.705825] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.722867] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.722892] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.739483] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.739510] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.755819] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.755844] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.772080] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.772105] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.790830] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.790856] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.805004] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.805029] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.351 [2024-06-10 13:50:58.821026] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.351 [2024-06-10 13:50:58.821051] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:58.838614] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:58.838639] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:58.854608] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:58.854633] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:58.872801] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:58.872827] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:58.888169] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:58.888194] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:58.897913] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:58.897937] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:58.912129] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:58.912154] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:58.928725] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:58.928750] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:58.945123] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:58.945148] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:58.962619] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:58.962649] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:58.978360] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:58.978384] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:58.995944] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:58.995968] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:59.011194] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:59.011219] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:59.027697] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:59.027721] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:59.043828] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:59.043852] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:59.061103] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:59.061128] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.610 [2024-06-10 13:50:59.077892] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.610 [2024-06-10 13:50:59.077917] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.095371] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.095396] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.112856] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.112880] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.127871] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.127895] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.145653] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.145677] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.162230] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.162254] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.178356] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.178380] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.196659] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.196684] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.213408] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.213432] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.229553] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.229583] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.246407] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.246431] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.262483] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.262508] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.273642] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.273671] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.290359] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.290384] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.306634] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.306658] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.869 [2024-06-10 13:50:59.325093] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.869 [2024-06-10 13:50:59.325118] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.341280] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.341304] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.359549] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.359574] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.375148] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.375173] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.386724] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.386749] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.403619] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.403643] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.417935] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.417960] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.433518] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.433543] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.442375] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.442399] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.457282] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.457306] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.473388] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.473413] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.489312] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.489337] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.501016] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.501042] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.518149] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.518174] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.534824] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.534849] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.553119] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.553144] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.567465] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.567490] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.576537] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.576562] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.128 [2024-06-10 13:50:59.590536] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.128 [2024-06-10 13:50:59.590560] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.386 [2024-06-10 13:50:59.608209] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.608233] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.623290] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.623315] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.632629] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.632653] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.647880] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.647904] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.659146] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.659169] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.676298] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.676322] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.691275] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.691300] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.707022] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.707047] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.724259] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.724284] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.742128] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.742151] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.757899] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.757923] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.774989] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.775014] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.789642] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.789667] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.805594] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.805619] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.822179] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.822204] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.840475] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.840500] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.387 [2024-06-10 13:50:59.854866] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.387 [2024-06-10 13:50:59.854892] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.645 [2024-06-10 13:50:59.870730] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.645 [2024-06-10 13:50:59.870754] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.645 [2024-06-10 13:50:59.889673] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.645 [2024-06-10 13:50:59.889698] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.645 [2024-06-10 13:50:59.904007] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.645 [2024-06-10 13:50:59.904034] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.645 [2024-06-10 13:50:59.913152] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.645 [2024-06-10 13:50:59.913175] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.645 [2024-06-10 13:50:59.928238] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.645 [2024-06-10 13:50:59.928264] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.645 [2024-06-10 13:50:59.945691] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.645 [2024-06-10 13:50:59.945716] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.645 [2024-06-10 13:50:59.961056] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.645 [2024-06-10 13:50:59.961082] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.645 [2024-06-10 13:50:59.970333] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.646 [2024-06-10 13:50:59.970357] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.646 [2024-06-10 13:50:59.984633] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.646 [2024-06-10 13:50:59.984660] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.646 [2024-06-10 13:51:00.001819] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.646 [2024-06-10 13:51:00.001845] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.646 [2024-06-10 13:51:00.017573] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.646 [2024-06-10 13:51:00.017606] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.646 [2024-06-10 13:51:00.036685] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.646 [2024-06-10 13:51:00.036715] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.646 [2024-06-10 13:51:00.050918] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.646 [2024-06-10 13:51:00.050945] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.646 [2024-06-10 13:51:00.067302] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.646 [2024-06-10 13:51:00.067328] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.646 [2024-06-10 13:51:00.084081] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.646 [2024-06-10 13:51:00.084107] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.646 [2024-06-10 13:51:00.100614] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.646 [2024-06-10 13:51:00.100639] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.646 [2024-06-10 13:51:00.116587] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.646 [2024-06-10 13:51:00.116613] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.128310] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.128336] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.144373] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.144398] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.162749] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.162775] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.177921] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.177946] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.196983] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.197008] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.211225] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.211251] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.228550] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.228581] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.244359] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.244385] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.263082] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.263108] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.277591] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.277617] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.293786] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.293811] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.311029] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.311053] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.326926] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.326952] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.338430] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.338455] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.355747] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.355772] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.905 [2024-06-10 13:51:00.371628] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.905 [2024-06-10 13:51:00.371654] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.163 [2024-06-10 13:51:00.389287] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.163 [2024-06-10 13:51:00.389311] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.163 [2024-06-10 13:51:00.405658] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.163 [2024-06-10 13:51:00.405683] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.163 [2024-06-10 13:51:00.422391] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.163 [2024-06-10 13:51:00.422416] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.163 [2024-06-10 13:51:00.439103] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.163 [2024-06-10 13:51:00.439133] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.163 [2024-06-10 13:51:00.456043] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.163 [2024-06-10 13:51:00.456069] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.164 [2024-06-10 13:51:00.472112] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.164 [2024-06-10 13:51:00.472137] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.164 [2024-06-10 13:51:00.489212] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.164 [2024-06-10 13:51:00.489238] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.164 [2024-06-10 13:51:00.506797] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.164 [2024-06-10 13:51:00.506821] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.164 [2024-06-10 13:51:00.522272] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.164 [2024-06-10 13:51:00.522298] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.164 [2024-06-10 13:51:00.533853] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.164 [2024-06-10 13:51:00.533878] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.164 [2024-06-10 13:51:00.551193] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.164 [2024-06-10 13:51:00.551218] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.164 [2024-06-10 13:51:00.565299] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.164 [2024-06-10 13:51:00.565324] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.164 [2024-06-10 13:51:00.581834] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.164 [2024-06-10 13:51:00.581859] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.164 [2024-06-10 13:51:00.599125] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.164 [2024-06-10 13:51:00.599150] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.164 [2024-06-10 13:51:00.614671] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.164 [2024-06-10 13:51:00.614697] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.164 [2024-06-10 13:51:00.626233] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.164 [2024-06-10 13:51:00.626258] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.422 [2024-06-10 13:51:00.642198] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.422 [2024-06-10 13:51:00.642223] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.422 [2024-06-10 13:51:00.659701] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.422 [2024-06-10 13:51:00.659726] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.422 [2024-06-10 13:51:00.674436] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.422 [2024-06-10 13:51:00.674461] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.422 [2024-06-10 13:51:00.685568] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.422 [2024-06-10 13:51:00.685598] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.422 [2024-06-10 13:51:00.702827] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.422 [2024-06-10 13:51:00.702851] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.422 [2024-06-10 13:51:00.718136] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.422 [2024-06-10 13:51:00.718161] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.422 [2024-06-10 13:51:00.736062] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.422 [2024-06-10 13:51:00.736092] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.422 [2024-06-10 13:51:00.750558] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.422 [2024-06-10 13:51:00.750589] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.422 [2024-06-10 13:51:00.768151] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.422 [2024-06-10 13:51:00.768176] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.422 [2024-06-10 13:51:00.783533] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.422 [2024-06-10 13:51:00.783558] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.423 [2024-06-10 13:51:00.794745] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.423 [2024-06-10 13:51:00.794770] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.423 [2024-06-10 13:51:00.812099] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.423 [2024-06-10 13:51:00.812124] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.423 [2024-06-10 13:51:00.827411] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.423 [2024-06-10 13:51:00.827437] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.423 [2024-06-10 13:51:00.845211] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.423 [2024-06-10 13:51:00.845236] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.423 [2024-06-10 13:51:00.861785] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.423 [2024-06-10 13:51:00.861809] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.423 [2024-06-10 13:51:00.879904] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.423 [2024-06-10 13:51:00.879930] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.681 [2024-06-10 13:51:00.894673] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.681 [2024-06-10 13:51:00.894698] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.681 [2024-06-10 13:51:00.906582] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.681 [2024-06-10 13:51:00.906607] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.681 [2024-06-10 13:51:00.924022] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.681 [2024-06-10 13:51:00.924053] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.681 [2024-06-10 13:51:00.940682] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.681 [2024-06-10 13:51:00.940708] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.681 [2024-06-10 13:51:00.957277] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.681 [2024-06-10 13:51:00.957302] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.681 [2024-06-10 13:51:00.974718] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.681 [2024-06-10 13:51:00.974743] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.682 [2024-06-10 13:51:00.989997] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.682 [2024-06-10 13:51:00.990023] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.682 [2024-06-10 13:51:01.006464] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.682 [2024-06-10 13:51:01.006489] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.682 [2024-06-10 13:51:01.023433] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.682 [2024-06-10 13:51:01.023459] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.682 [2024-06-10 13:51:01.040523] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.682 [2024-06-10 13:51:01.040554] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.682 [2024-06-10 13:51:01.057532] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.682 [2024-06-10 13:51:01.057557] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.682 [2024-06-10 13:51:01.073965] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.682 [2024-06-10 13:51:01.073989] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.682 [2024-06-10 13:51:01.091303] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.682 [2024-06-10 13:51:01.091327] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.682 [2024-06-10 13:51:01.108798] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.682 [2024-06-10 13:51:01.108823] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.682 [2024-06-10 13:51:01.124279] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.682 [2024-06-10 13:51:01.124304] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.682 [2024-06-10 13:51:01.135793] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.682 [2024-06-10 13:51:01.135818] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.682 [2024-06-10 13:51:01.152517] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.682 [2024-06-10 13:51:01.152542] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.940 [2024-06-10 13:51:01.168722] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.940 [2024-06-10 13:51:01.168746] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.940 [2024-06-10 13:51:01.186338] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.940 [2024-06-10 13:51:01.186370] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.940 [2024-06-10 13:51:01.200518] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.940 [2024-06-10 13:51:01.200543] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.940 00:22:46.940 Latency(us) 00:22:46.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.940 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:22:46.940 Nvme1n1 : 5.01 12572.72 98.22 0.00 0.00 10170.84 4561.31 19188.94 00:22:46.941 =================================================================================================================== 00:22:46.941 Total : 12572.72 98.22 0.00 0.00 10170.84 4561.31 19188.94 00:22:46.941 [2024-06-10 13:51:01.211702] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.211726] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.223730] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.223751] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.235765] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.235782] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.247800] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.247821] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.259831] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.259849] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.271859] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.271884] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.283893] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.283911] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.295924] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.295944] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.307957] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.307975] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.319986] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.320002] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.332020] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.332036] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.344056] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.344073] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.356088] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.356103] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.368121] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.368137] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.380158] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.380176] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.392189] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.392204] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.941 [2024-06-10 13:51:01.404219] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.941 [2024-06-10 13:51:01.404234] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1405337) - No such process 00:22:47.199 13:51:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1405337 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:47.200 delay0 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.200 13:51:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:22:47.200 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.200 [2024-06-10 13:51:01.517170] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:22:53.764 Initializing NVMe Controllers 00:22:53.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:53.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:53.764 Initialization complete. Launching workers. 
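For reference, the delay0/abort sequence that zcopy.sh drives through rpc_cmd above can be reproduced by hand. This is a minimal sketch under stated assumptions: it assumes the target from this run is still listening on 10.0.0.2:4420, that the commands are issued from the SPDK repository root, and that scripts/rpc.py talks to the default /var/tmp/spdk.sock RPC socket (the harness wraps the same RPCs in rpc_cmd with a namespaced socket).

  # Swap the malloc-backed namespace for a delay bdev, then fire aborts at it over NVMe/TCP.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Same invocation as the test: 5 s of queued randrw I/O with aborts against NSID 1.
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'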
00:22:53.764 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 134 00:22:53.764 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 413, failed to submit 41 00:22:53.764 success 238, unsuccess 175, failed 0 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:53.764 rmmod nvme_tcp 00:22:53.764 rmmod nvme_fabrics 00:22:53.764 rmmod nvme_keyring 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1403230 ']' 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1403230 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 1403230 ']' 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 1403230 00:22:53.764 13:51:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:22:53.765 13:51:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:53.765 13:51:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1403230 00:22:53.765 13:51:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:53.765 13:51:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:53.765 13:51:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1403230' 00:22:53.765 killing process with pid 1403230 00:22:53.765 13:51:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 1403230 00:22:53.765 13:51:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 1403230 00:22:53.765 13:51:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:53.765 13:51:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:53.765 13:51:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:53.765 13:51:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.765 13:51:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.765 13:51:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.765 13:51:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.765 13:51:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.304 13:51:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:56.304 00:22:56.304 real 0m34.922s 00:22:56.304 user 0m43.385s 00:22:56.304 sys 0m14.089s 00:22:56.304 13:51:10 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:22:56.304 13:51:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:56.304 ************************************ 00:22:56.304 END TEST nvmf_zcopy 00:22:56.304 ************************************ 00:22:56.304 13:51:10 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:56.304 13:51:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:56.304 13:51:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:56.304 13:51:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:56.304 ************************************ 00:22:56.304 START TEST nvmf_nmic 00:22:56.304 ************************************ 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:56.304 * Looking for test storage... 00:22:56.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.304 13:51:10 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.305 13:51:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:04.431 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:04.432 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:04.432 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:04.432 Found net devices under 0000:af:00.0: cvl_0_0 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:04.432 Found net devices under 0000:af:00.1: cvl_0_1 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.432 13:51:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.692 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.692 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.692 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:04.692 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.692 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.692 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.692 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:04.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:23:04.692 00:23:04.692 --- 10.0.0.2 ping statistics --- 00:23:04.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.692 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:23:04.692 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:23:04.952 00:23:04.952 --- 10.0.0.1 ping statistics --- 00:23:04.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.952 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1412284 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1412284 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 1412284 ']' 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:04.952 13:51:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:04.952 [2024-06-10 13:51:19.262533] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:23:04.952 [2024-06-10 13:51:19.262606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.952 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.952 [2024-06-10 13:51:19.378603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.211 [2024-06-10 13:51:19.467492] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.211 [2024-06-10 13:51:19.467534] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:05.211 [2024-06-10 13:51:19.467547] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.211 [2024-06-10 13:51:19.467559] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.212 [2024-06-10 13:51:19.467569] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.212 [2024-06-10 13:51:19.467708] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.212 [2024-06-10 13:51:19.467730] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.212 [2024-06-10 13:51:19.467843] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.212 [2024-06-10 13:51:19.467843] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:05.814 [2024-06-10 13:51:20.228960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:05.814 Malloc0 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:05.814 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:06.095 [2024-06-10 13:51:20.284822] tcp.c: 982:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:23:06.095 test case1: single bdev can't be used in multiple subsystems 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:06.095 [2024-06-10 13:51:20.308677] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:23:06.095 [2024-06-10 13:51:20.308705] subsystem.c:2068:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:23:06.095 [2024-06-10 13:51:20.308719] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:06.095 request: 00:23:06.095 { 00:23:06.095 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:23:06.095 "namespace": { 00:23:06.095 "bdev_name": "Malloc0", 00:23:06.095 "no_auto_visible": false 00:23:06.095 }, 00:23:06.095 "method": "nvmf_subsystem_add_ns", 00:23:06.095 "req_id": 1 00:23:06.095 } 00:23:06.095 Got JSON-RPC error response 00:23:06.095 response: 00:23:06.095 { 00:23:06.095 "code": -32602, 00:23:06.095 "message": "Invalid parameters" 00:23:06.095 } 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:23:06.095 Adding namespace failed - expected result. 
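The failure above is the point of nmic.sh case1: Malloc0 is already claimed exclusive_write by cnode1, so the second nvmf_subsystem_add_ns against cnode2 is expected to be rejected with the -32602 JSON-RPC error shown. A minimal stand-alone sketch of the same check, assuming a running nvmf_tgt with the tcp transport already created and scripts/rpc.py using the default /var/tmp/spdk.sock socket (the harness issues the identical RPCs through rpc_cmd):

  # One bdev, two subsystems: only the first claim may succeed.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: bdev already claimed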
00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:23:06.095 test case2: host connect to nvmf target in multiple paths 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:06.095 [2024-06-10 13:51:20.324862] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.095 13:51:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:07.472 13:51:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:23:08.847 13:51:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:23:08.848 13:51:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:23:08.848 13:51:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:08.848 13:51:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:08.848 13:51:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:23:10.752 13:51:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:10.752 13:51:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:10.752 13:51:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:23:10.752 13:51:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:10.752 13:51:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:10.752 13:51:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:23:10.752 13:51:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:10.752 [global] 00:23:10.752 thread=1 00:23:10.752 invalidate=1 00:23:10.752 rw=write 00:23:10.752 time_based=1 00:23:10.752 runtime=1 00:23:10.752 ioengine=libaio 00:23:10.752 direct=1 00:23:10.752 bs=4096 00:23:10.752 iodepth=1 00:23:10.752 norandommap=0 00:23:10.752 numjobs=1 00:23:10.752 00:23:10.752 verify_dump=1 00:23:10.752 verify_backlog=512 00:23:10.752 verify_state_save=0 00:23:10.752 do_verify=1 00:23:10.752 verify=crc32c-intel 00:23:10.752 [job0] 00:23:10.752 filename=/dev/nvme0n1 00:23:10.752 Could not set queue depth (nvme0n1) 00:23:11.010 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:11.010 fio-3.35 00:23:11.010 Starting 1 thread 00:23:12.388 00:23:12.388 job0: (groupid=0, jobs=1): err= 0: pid=1413469: Mon Jun 10 13:51:26 2024 00:23:12.388 read: IOPS=1198, BW=4792KiB/s (4907kB/s)(4792KiB/1000msec) 00:23:12.388 slat (nsec): min=8747, max=40526, avg=9565.46, stdev=1540.63 
00:23:12.388 clat (usec): min=329, max=561, avg=448.30, stdev=40.23 00:23:12.388 lat (usec): min=338, max=571, avg=457.87, stdev=40.24 00:23:12.388 clat percentiles (usec): 00:23:12.388 | 1.00th=[ 371], 5.00th=[ 396], 10.00th=[ 412], 20.00th=[ 420], 00:23:12.388 | 30.00th=[ 429], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 441], 00:23:12.388 | 70.00th=[ 453], 80.00th=[ 486], 90.00th=[ 510], 95.00th=[ 529], 00:23:12.388 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 562], 99.95th=[ 562], 00:23:12.388 | 99.99th=[ 562] 00:23:12.388 write: IOPS=1536, BW=6144KiB/s (6291kB/s)(6144KiB/1000msec); 0 zone resets 00:23:12.388 slat (usec): min=12, max=28364, avg=31.96, stdev=723.39 00:23:12.388 clat (usec): min=202, max=382, avg=256.93, stdev=17.63 00:23:12.388 lat (usec): min=214, max=28746, avg=288.89, stdev=726.81 00:23:12.388 clat percentiles (usec): 00:23:12.388 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 233], 20.00th=[ 247], 00:23:12.388 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 262], 00:23:12.388 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 281], 00:23:12.388 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 355], 99.95th=[ 383], 00:23:12.388 | 99.99th=[ 383] 00:23:12.388 bw ( KiB/s): min= 6472, max= 6472, per=100.00%, avg=6472.00, stdev= 0.00, samples=1 00:23:12.388 iops : min= 1618, max= 1618, avg=1618.00, stdev= 0.00, samples=1 00:23:12.388 lat (usec) : 250=13.72%, 500=80.72%, 750=5.56% 00:23:12.388 cpu : usr=2.80%, sys=4.60%, ctx=2737, majf=0, minf=2 00:23:12.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:12.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:12.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:12.388 issued rwts: total=1198,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:12.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:12.388 00:23:12.388 Run status group 0 (all jobs): 00:23:12.388 READ: bw=4792KiB/s (4907kB/s), 4792KiB/s-4792KiB/s (4907kB/s-4907kB/s), io=4792KiB (4907kB), run=1000-1000msec 00:23:12.388 WRITE: bw=6144KiB/s (6291kB/s), 6144KiB/s-6144KiB/s (6291kB/s-6291kB/s), io=6144KiB (6291kB), run=1000-1000msec 00:23:12.388 00:23:12.388 Disk stats (read/write): 00:23:12.388 nvme0n1: ios=1050/1442, merge=0/0, ticks=1447/360, in_queue=1807, util=98.90% 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:12.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:12.388 rmmod nvme_tcp 00:23:12.388 rmmod nvme_fabrics 00:23:12.388 rmmod nvme_keyring 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1412284 ']' 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1412284 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 1412284 ']' 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 1412284 00:23:12.388 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:23:12.647 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:12.647 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1412284 00:23:12.648 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:12.648 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:12.648 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1412284' 00:23:12.648 killing process with pid 1412284 00:23:12.648 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 1412284 00:23:12.648 13:51:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 1412284 00:23:12.907 13:51:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:12.907 13:51:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:12.907 13:51:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:12.907 13:51:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:12.907 13:51:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:12.907 13:51:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.907 13:51:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.907 13:51:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.811 13:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:14.811 00:23:14.811 real 0m18.984s 00:23:14.811 user 0m44.540s 00:23:14.811 sys 0m7.885s 00:23:14.811 13:51:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:14.811 13:51:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:14.811 ************************************ 00:23:14.811 END TEST nvmf_nmic 00:23:14.811 ************************************ 00:23:14.811 13:51:29 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:14.811 13:51:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:14.811 13:51:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:14.811 
13:51:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:15.070 ************************************ 00:23:15.070 START TEST nvmf_fio_target 00:23:15.070 ************************************ 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:15.070 * Looking for test storage... 00:23:15.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:15.070 13:51:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:25.049 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:25.049 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:25.049 Found net devices under 0000:af:00.0: cvl_0_0 00:23:25.049 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.050 13:51:37 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:25.050 Found net devices under 0000:af:00.1: cvl_0_1 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.050 13:51:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:25.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:23:25.050 00:23:25.050 --- 10.0.0.2 ping statistics --- 00:23:25.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.050 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:23:25.050 00:23:25.050 --- 10.0.0.1 ping statistics --- 00:23:25.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.050 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1418177 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1418177 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 1418177 ']' 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:25.050 13:51:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.050 [2024-06-10 13:51:38.324007] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:23:25.050 [2024-06-10 13:51:38.324071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.050 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.050 [2024-06-10 13:51:38.452426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.050 [2024-06-10 13:51:38.538054] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.050 [2024-06-10 13:51:38.538099] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.050 [2024-06-10 13:51:38.538112] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.050 [2024-06-10 13:51:38.538124] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.050 [2024-06-10 13:51:38.538134] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.050 [2024-06-10 13:51:38.538194] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.050 [2024-06-10 13:51:38.538289] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.050 [2024-06-10 13:51:38.538403] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.050 [2024-06-10 13:51:38.538403] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.050 13:51:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:25.050 13:51:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:23:25.050 13:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:25.050 13:51:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:25.050 13:51:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.050 13:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.050 13:51:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:25.050 [2024-06-10 13:51:39.495622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.308 13:51:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:25.308 13:51:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:23:25.567 13:51:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:25.567 13:51:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:23:25.567 13:51:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:25.825 13:51:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:23:25.825 13:51:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:26.084 13:51:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:23:26.084 13:51:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:23:26.344 13:51:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:26.602 13:51:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:23:26.602 13:51:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:26.861 13:51:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:23:26.861 13:51:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:27.119 13:51:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:23:27.119 13:51:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:23:27.378 13:51:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:27.636 13:51:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:27.636 13:51:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:27.895 13:51:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:27.895 13:51:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:28.154 13:51:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.413 [2024-06-10 13:51:42.681598] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.414 13:51:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:23:28.672 13:51:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:23:28.930 13:51:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:30.307 13:51:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:23:30.307 13:51:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:23:30.307 13:51:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:30.307 13:51:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:23:30.307 13:51:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:23:30.307 13:51:44 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1204 -- # sleep 2 00:23:32.211 13:51:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:32.211 13:51:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:32.211 13:51:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:23:32.211 13:51:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:23:32.211 13:51:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:32.211 13:51:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:23:32.211 13:51:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:32.211 [global] 00:23:32.211 thread=1 00:23:32.211 invalidate=1 00:23:32.211 rw=write 00:23:32.211 time_based=1 00:23:32.211 runtime=1 00:23:32.211 ioengine=libaio 00:23:32.211 direct=1 00:23:32.211 bs=4096 00:23:32.211 iodepth=1 00:23:32.211 norandommap=0 00:23:32.211 numjobs=1 00:23:32.211 00:23:32.211 verify_dump=1 00:23:32.211 verify_backlog=512 00:23:32.211 verify_state_save=0 00:23:32.211 do_verify=1 00:23:32.211 verify=crc32c-intel 00:23:32.211 [job0] 00:23:32.211 filename=/dev/nvme0n1 00:23:32.211 [job1] 00:23:32.211 filename=/dev/nvme0n2 00:23:32.211 [job2] 00:23:32.211 filename=/dev/nvme0n3 00:23:32.211 [job3] 00:23:32.211 filename=/dev/nvme0n4 00:23:32.485 Could not set queue depth (nvme0n1) 00:23:32.485 Could not set queue depth (nvme0n2) 00:23:32.485 Could not set queue depth (nvme0n3) 00:23:32.485 Could not set queue depth (nvme0n4) 00:23:32.745 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:32.745 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:32.745 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:32.745 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:32.745 fio-3.35 00:23:32.745 Starting 4 threads 00:23:34.119 00:23:34.119 job0: (groupid=0, jobs=1): err= 0: pid=1419977: Mon Jun 10 13:51:48 2024 00:23:34.119 read: IOPS=260, BW=1041KiB/s (1066kB/s)(1076KiB/1034msec) 00:23:34.119 slat (nsec): min=9038, max=28966, avg=10838.91, stdev=4132.21 00:23:34.119 clat (usec): min=330, max=42301, avg=3356.76, stdev=10508.05 00:23:34.119 lat (usec): min=339, max=42311, avg=3367.60, stdev=10510.45 00:23:34.119 clat percentiles (usec): 00:23:34.119 | 1.00th=[ 343], 5.00th=[ 371], 10.00th=[ 383], 20.00th=[ 404], 00:23:34.119 | 30.00th=[ 429], 40.00th=[ 449], 50.00th=[ 461], 60.00th=[ 510], 00:23:34.119 | 70.00th=[ 523], 80.00th=[ 529], 90.00th=[ 603], 95.00th=[41157], 00:23:34.119 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:34.119 | 99.99th=[42206] 00:23:34.119 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:23:34.119 slat (nsec): min=12604, max=49689, avg=13772.83, stdev=2094.66 00:23:34.119 clat (usec): min=184, max=558, avg=229.80, stdev=42.80 00:23:34.119 lat (usec): min=197, max=608, avg=243.57, stdev=43.50 00:23:34.119 clat percentiles (usec): 00:23:34.119 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:23:34.119 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 223], 00:23:34.119 | 
70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 265], 95.00th=[ 343], 00:23:34.119 | 99.00th=[ 359], 99.50th=[ 429], 99.90th=[ 562], 99.95th=[ 562], 00:23:34.119 | 99.99th=[ 562] 00:23:34.119 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:23:34.119 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:34.119 lat (usec) : 250=56.85%, 500=28.04%, 750=12.68% 00:23:34.119 lat (msec) : 50=2.43% 00:23:34.119 cpu : usr=0.48%, sys=1.65%, ctx=782, majf=0, minf=1 00:23:34.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.120 issued rwts: total=269,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:34.120 job1: (groupid=0, jobs=1): err= 0: pid=1419978: Mon Jun 10 13:51:48 2024 00:23:34.120 read: IOPS=20, BW=82.6KiB/s (84.6kB/s)(84.0KiB/1017msec) 00:23:34.120 slat (nsec): min=11419, max=25915, avg=24202.24, stdev=3325.45 00:23:34.120 clat (usec): min=40812, max=42306, avg=41551.04, stdev=539.32 00:23:34.120 lat (usec): min=40837, max=42318, avg=41575.25, stdev=538.11 00:23:34.120 clat percentiles (usec): 00:23:34.120 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:23:34.120 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:23:34.120 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:34.120 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:34.120 | 99.99th=[42206] 00:23:34.120 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:23:34.120 slat (nsec): min=12218, max=40812, avg=13284.70, stdev=2094.12 00:23:34.120 clat (usec): min=223, max=515, avg=263.74, stdev=21.55 00:23:34.120 lat (usec): min=236, max=556, avg=277.02, stdev=22.25 00:23:34.120 clat percentiles (usec): 00:23:34.120 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 249], 00:23:34.120 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:23:34.120 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:23:34.120 | 99.00th=[ 314], 99.50th=[ 355], 99.90th=[ 515], 99.95th=[ 515], 00:23:34.120 | 99.99th=[ 515] 00:23:34.120 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:23:34.120 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:34.120 lat (usec) : 250=23.83%, 500=72.05%, 750=0.19% 00:23:34.120 lat (msec) : 50=3.94% 00:23:34.120 cpu : usr=0.10%, sys=0.89%, ctx=534, majf=0, minf=1 00:23:34.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.120 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:34.120 job2: (groupid=0, jobs=1): err= 0: pid=1419979: Mon Jun 10 13:51:48 2024 00:23:34.120 read: IOPS=1155, BW=4623KiB/s (4734kB/s)(4628KiB/1001msec) 00:23:34.120 slat (nsec): min=8693, max=34471, avg=9592.55, stdev=1760.24 00:23:34.120 clat (usec): min=420, max=637, avg=499.80, stdev=26.14 00:23:34.120 lat (usec): min=429, max=650, avg=509.40, stdev=26.45 00:23:34.120 clat percentiles (usec): 00:23:34.120 | 1.00th=[ 445], 5.00th=[ 
465], 10.00th=[ 474], 20.00th=[ 482], 00:23:34.120 | 30.00th=[ 490], 40.00th=[ 494], 50.00th=[ 498], 60.00th=[ 502], 00:23:34.120 | 70.00th=[ 506], 80.00th=[ 515], 90.00th=[ 529], 95.00th=[ 545], 00:23:34.120 | 99.00th=[ 603], 99.50th=[ 635], 99.90th=[ 635], 99.95th=[ 635], 00:23:34.120 | 99.99th=[ 635] 00:23:34.120 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:23:34.120 slat (nsec): min=11536, max=42363, avg=12407.20, stdev=1567.02 00:23:34.120 clat (usec): min=196, max=557, avg=251.31, stdev=31.21 00:23:34.120 lat (usec): min=208, max=600, avg=263.72, stdev=31.40 00:23:34.120 clat percentiles (usec): 00:23:34.120 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:23:34.120 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 255], 00:23:34.120 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 302], 00:23:34.120 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 375], 99.95th=[ 562], 00:23:34.120 | 99.99th=[ 562] 00:23:34.120 bw ( KiB/s): min= 6632, max= 6632, per=55.86%, avg=6632.00, stdev= 0.00, samples=1 00:23:34.120 iops : min= 1658, max= 1658, avg=1658.00, stdev= 0.00, samples=1 00:23:34.120 lat (usec) : 250=31.97%, 500=49.54%, 750=18.49% 00:23:34.120 cpu : usr=1.20%, sys=3.60%, ctx=2693, majf=0, minf=1 00:23:34.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.120 issued rwts: total=1157,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:34.120 job3: (groupid=0, jobs=1): err= 0: pid=1419980: Mon Jun 10 13:51:48 2024 00:23:34.120 read: IOPS=267, BW=1071KiB/s (1096kB/s)(1108KiB/1035msec) 00:23:34.120 slat (nsec): min=9098, max=29280, avg=10933.42, stdev=3859.67 00:23:34.120 clat (usec): min=300, max=42340, avg=3270.17, stdev=10367.89 00:23:34.120 lat (usec): min=310, max=42350, avg=3281.10, stdev=10370.26 00:23:34.120 clat percentiles (usec): 00:23:34.120 | 1.00th=[ 318], 5.00th=[ 363], 10.00th=[ 383], 20.00th=[ 400], 00:23:34.120 | 30.00th=[ 424], 40.00th=[ 445], 50.00th=[ 457], 60.00th=[ 510], 00:23:34.120 | 70.00th=[ 523], 80.00th=[ 529], 90.00th=[ 603], 95.00th=[41157], 00:23:34.120 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:34.120 | 99.99th=[42206] 00:23:34.120 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:23:34.120 slat (nsec): min=12786, max=40826, avg=14194.83, stdev=2505.30 00:23:34.120 clat (usec): min=178, max=640, avg=227.37, stdev=42.70 00:23:34.120 lat (usec): min=192, max=681, avg=241.57, stdev=43.43 00:23:34.120 clat percentiles (usec): 00:23:34.120 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:23:34.120 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:23:34.120 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 347], 00:23:34.120 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 644], 99.95th=[ 644], 00:23:34.120 | 99.99th=[ 644] 00:23:34.120 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:23:34.120 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:34.120 lat (usec) : 250=57.92%, 500=26.74%, 750=12.93% 00:23:34.120 lat (msec) : 50=2.41% 00:23:34.120 cpu : usr=0.97%, sys=1.16%, ctx=790, majf=0, minf=2 00:23:34.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:23:34.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.120 issued rwts: total=277,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:34.120 00:23:34.120 Run status group 0 (all jobs): 00:23:34.120 READ: bw=6663KiB/s (6823kB/s), 82.6KiB/s-4623KiB/s (84.6kB/s-4734kB/s), io=6896KiB (7062kB), run=1001-1035msec 00:23:34.120 WRITE: bw=11.6MiB/s (12.2MB/s), 1979KiB/s-6138KiB/s (2026kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1035msec 00:23:34.120 00:23:34.120 Disk stats (read/write): 00:23:34.120 nvme0n1: ios=289/512, merge=0/0, ticks=1639/111, in_queue=1750, util=96.39% 00:23:34.120 nvme0n2: ios=39/512, merge=0/0, ticks=1610/132, in_queue=1742, util=97.00% 00:23:34.120 nvme0n3: ios=998/1024, merge=0/0, ticks=500/254, in_queue=754, util=87.17% 00:23:34.120 nvme0n4: ios=292/512, merge=0/0, ticks=1524/105, in_queue=1629, util=96.98% 00:23:34.120 13:51:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:23:34.120 [global] 00:23:34.120 thread=1 00:23:34.120 invalidate=1 00:23:34.120 rw=randwrite 00:23:34.120 time_based=1 00:23:34.120 runtime=1 00:23:34.120 ioengine=libaio 00:23:34.120 direct=1 00:23:34.120 bs=4096 00:23:34.120 iodepth=1 00:23:34.120 norandommap=0 00:23:34.120 numjobs=1 00:23:34.120 00:23:34.120 verify_dump=1 00:23:34.120 verify_backlog=512 00:23:34.120 verify_state_save=0 00:23:34.120 do_verify=1 00:23:34.120 verify=crc32c-intel 00:23:34.120 [job0] 00:23:34.120 filename=/dev/nvme0n1 00:23:34.120 [job1] 00:23:34.120 filename=/dev/nvme0n2 00:23:34.120 [job2] 00:23:34.120 filename=/dev/nvme0n3 00:23:34.120 [job3] 00:23:34.120 filename=/dev/nvme0n4 00:23:34.120 Could not set queue depth (nvme0n1) 00:23:34.120 Could not set queue depth (nvme0n2) 00:23:34.120 Could not set queue depth (nvme0n3) 00:23:34.120 Could not set queue depth (nvme0n4) 00:23:34.378 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:34.378 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:34.378 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:34.378 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:34.378 fio-3.35 00:23:34.378 Starting 4 threads 00:23:35.779 00:23:35.779 job0: (groupid=0, jobs=1): err= 0: pid=1420403: Mon Jun 10 13:51:50 2024 00:23:35.779 read: IOPS=317, BW=1270KiB/s (1300kB/s)(1308KiB/1030msec) 00:23:35.779 slat (nsec): min=5152, max=85455, avg=10578.48, stdev=5138.85 00:23:35.779 clat (usec): min=326, max=41942, avg=2720.06, stdev=9257.63 00:23:35.779 lat (usec): min=335, max=41957, avg=2730.64, stdev=9259.99 00:23:35.779 clat percentiles (usec): 00:23:35.779 | 1.00th=[ 375], 5.00th=[ 383], 10.00th=[ 400], 20.00th=[ 433], 00:23:35.779 | 30.00th=[ 474], 40.00th=[ 498], 50.00th=[ 506], 60.00th=[ 515], 00:23:35.779 | 70.00th=[ 523], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[40633], 00:23:35.779 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:23:35.779 | 99.99th=[41681] 00:23:35.779 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:23:35.779 slat (nsec): min=5957, max=90905, avg=12809.96, 
stdev=3834.10 00:23:35.779 clat (usec): min=207, max=521, avg=248.50, stdev=38.50 00:23:35.779 lat (usec): min=219, max=601, avg=261.31, stdev=39.65 00:23:35.779 clat percentiles (usec): 00:23:35.779 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 225], 00:23:35.779 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:23:35.779 | 70.00th=[ 251], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 297], 00:23:35.779 | 99.00th=[ 449], 99.50th=[ 510], 99.90th=[ 523], 99.95th=[ 523], 00:23:35.779 | 99.99th=[ 523] 00:23:35.779 bw ( KiB/s): min= 4096, max= 4096, per=25.75%, avg=4096.00, stdev= 0.00, samples=1 00:23:35.779 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:35.779 lat (usec) : 250=41.95%, 500=34.68%, 750=20.86%, 1000=0.24% 00:23:35.779 lat (msec) : 2=0.12%, 50=2.15% 00:23:35.779 cpu : usr=1.17%, sys=1.07%, ctx=841, majf=0, minf=1 00:23:35.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.779 issued rwts: total=327,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.779 job1: (groupid=0, jobs=1): err= 0: pid=1420404: Mon Jun 10 13:51:50 2024 00:23:35.779 read: IOPS=1021, BW=4086KiB/s (4184kB/s)(4172KiB/1021msec) 00:23:35.779 slat (nsec): min=8750, max=42540, avg=9653.93, stdev=1586.49 00:23:35.779 clat (usec): min=287, max=41551, avg=566.80, stdev=2522.31 00:23:35.779 lat (usec): min=297, max=41563, avg=576.46, stdev=2522.54 00:23:35.779 clat percentiles (usec): 00:23:35.779 | 1.00th=[ 314], 5.00th=[ 359], 10.00th=[ 375], 20.00th=[ 383], 00:23:35.779 | 30.00th=[ 392], 40.00th=[ 396], 50.00th=[ 400], 60.00th=[ 408], 00:23:35.779 | 70.00th=[ 412], 80.00th=[ 420], 90.00th=[ 441], 95.00th=[ 494], 00:23:35.779 | 99.00th=[ 627], 99.50th=[ 1270], 99.90th=[41157], 99.95th=[41681], 00:23:35.779 | 99.99th=[41681] 00:23:35.779 write: IOPS=1504, BW=6018KiB/s (6162kB/s)(6144KiB/1021msec); 0 zone resets 00:23:35.779 slat (usec): min=12, max=206, avg=13.59, stdev= 5.42 00:23:35.779 clat (usec): min=174, max=666, avg=254.05, stdev=34.16 00:23:35.779 lat (usec): min=186, max=681, avg=267.64, stdev=34.69 00:23:35.779 clat percentiles (usec): 00:23:35.779 | 1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 227], 00:23:35.779 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 265], 00:23:35.779 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 293], 00:23:35.779 | 99.00th=[ 359], 99.50th=[ 400], 99.90th=[ 474], 99.95th=[ 668], 00:23:35.779 | 99.99th=[ 668] 00:23:35.779 bw ( KiB/s): min= 4168, max= 8120, per=38.63%, avg=6144.00, stdev=2794.49, samples=2 00:23:35.779 iops : min= 1042, max= 2030, avg=1536.00, stdev=698.62, samples=2 00:23:35.779 lat (usec) : 250=29.59%, 500=68.75%, 750=1.43% 00:23:35.779 lat (msec) : 2=0.04%, 10=0.04%, 50=0.16% 00:23:35.779 cpu : usr=3.04%, sys=3.82%, ctx=2581, majf=0, minf=2 00:23:35.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.779 issued rwts: total=1043,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.779 job2: (groupid=0, jobs=1): err= 0: pid=1420405: Mon Jun 
10 13:51:50 2024 00:23:35.779 read: IOPS=1057, BW=4232KiB/s (4333kB/s)(4236KiB/1001msec) 00:23:35.779 slat (nsec): min=9178, max=29539, avg=10855.51, stdev=1522.26 00:23:35.779 clat (usec): min=373, max=2528, avg=515.88, stdev=70.89 00:23:35.779 lat (usec): min=383, max=2539, avg=526.74, stdev=71.00 00:23:35.779 clat percentiles (usec): 00:23:35.779 | 1.00th=[ 392], 5.00th=[ 416], 10.00th=[ 490], 20.00th=[ 502], 00:23:35.779 | 30.00th=[ 510], 40.00th=[ 515], 50.00th=[ 519], 60.00th=[ 523], 00:23:35.779 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 545], 95.00th=[ 553], 00:23:35.779 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 758], 99.95th=[ 2540], 00:23:35.779 | 99.99th=[ 2540] 00:23:35.779 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:23:35.779 slat (nsec): min=12963, max=49676, avg=14828.13, stdev=1979.27 00:23:35.779 clat (usec): min=210, max=1749, avg=268.00, stdev=52.60 00:23:35.779 lat (usec): min=225, max=1767, avg=282.83, stdev=52.83 00:23:35.779 clat percentiles (usec): 00:23:35.779 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 247], 00:23:35.779 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:23:35.779 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:23:35.779 | 99.00th=[ 330], 99.50th=[ 363], 99.90th=[ 1303], 99.95th=[ 1745], 00:23:35.779 | 99.99th=[ 1745] 00:23:35.779 bw ( KiB/s): min= 6000, max= 6000, per=37.72%, avg=6000.00, stdev= 0.00, samples=1 00:23:35.779 iops : min= 1500, max= 1500, avg=1500.00, stdev= 0.00, samples=1 00:23:35.779 lat (usec) : 250=14.80%, 500=50.91%, 750=34.14%, 1000=0.04% 00:23:35.779 lat (msec) : 2=0.08%, 4=0.04% 00:23:35.780 cpu : usr=3.00%, sys=4.70%, ctx=2596, majf=0, minf=1 00:23:35.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.780 issued rwts: total=1059,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.780 job3: (groupid=0, jobs=1): err= 0: pid=1420406: Mon Jun 10 13:51:50 2024 00:23:35.780 read: IOPS=20, BW=82.2KiB/s (84.2kB/s)(84.0KiB/1022msec) 00:23:35.780 slat (nsec): min=10031, max=28496, avg=23134.19, stdev=4338.37 00:23:35.780 clat (usec): min=40760, max=43965, avg=41277.55, stdev=719.30 00:23:35.780 lat (usec): min=40786, max=43993, avg=41300.69, stdev=720.10 00:23:35.780 clat percentiles (usec): 00:23:35.780 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:23:35.780 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:23:35.780 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:23:35.780 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:23:35.780 | 99.99th=[43779] 00:23:35.780 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:23:35.780 slat (nsec): min=12182, max=44858, avg=13062.26, stdev=2126.96 00:23:35.780 clat (usec): min=239, max=440, avg=286.45, stdev=25.27 00:23:35.780 lat (usec): min=251, max=453, avg=299.52, stdev=25.74 00:23:35.780 clat percentiles (usec): 00:23:35.780 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:23:35.780 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:23:35.780 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 326], 00:23:35.780 | 99.00th=[ 375], 99.50th=[ 404], 99.90th=[ 441], 
99.95th=[ 441], 00:23:35.780 | 99.99th=[ 441] 00:23:35.780 bw ( KiB/s): min= 4096, max= 4096, per=25.75%, avg=4096.00, stdev= 0.00, samples=1 00:23:35.780 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:35.780 lat (usec) : 250=2.25%, 500=93.81% 00:23:35.780 lat (msec) : 50=3.94% 00:23:35.780 cpu : usr=0.10%, sys=1.27%, ctx=533, majf=0, minf=1 00:23:35.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.780 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.780 00:23:35.780 Run status group 0 (all jobs): 00:23:35.780 READ: bw=9515KiB/s (9743kB/s), 82.2KiB/s-4232KiB/s (84.2kB/s-4333kB/s), io=9800KiB (10.0MB), run=1001-1030msec 00:23:35.780 WRITE: bw=15.5MiB/s (16.3MB/s), 1988KiB/s-6138KiB/s (2036kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1030msec 00:23:35.780 00:23:35.780 Disk stats (read/write): 00:23:35.780 nvme0n1: ios=372/512, merge=0/0, ticks=727/125, in_queue=852, util=85.37% 00:23:35.780 nvme0n2: ios=1073/1477, merge=0/0, ticks=1145/348, in_queue=1493, util=87.51% 00:23:35.780 nvme0n3: ios=1015/1024, merge=0/0, ticks=1407/277, in_queue=1684, util=91.60% 00:23:35.780 nvme0n4: ios=73/512, merge=0/0, ticks=760/145, in_queue=905, util=96.34% 00:23:35.780 13:51:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:23:35.780 [global] 00:23:35.780 thread=1 00:23:35.780 invalidate=1 00:23:35.780 rw=write 00:23:35.780 time_based=1 00:23:35.780 runtime=1 00:23:35.780 ioengine=libaio 00:23:35.780 direct=1 00:23:35.780 bs=4096 00:23:35.780 iodepth=128 00:23:35.780 norandommap=0 00:23:35.780 numjobs=1 00:23:35.780 00:23:35.780 verify_dump=1 00:23:35.780 verify_backlog=512 00:23:35.780 verify_state_save=0 00:23:35.780 do_verify=1 00:23:35.780 verify=crc32c-intel 00:23:35.780 [job0] 00:23:35.780 filename=/dev/nvme0n1 00:23:35.780 [job1] 00:23:35.780 filename=/dev/nvme0n2 00:23:35.780 [job2] 00:23:35.780 filename=/dev/nvme0n3 00:23:35.780 [job3] 00:23:35.780 filename=/dev/nvme0n4 00:23:35.780 Could not set queue depth (nvme0n1) 00:23:35.780 Could not set queue depth (nvme0n2) 00:23:35.780 Could not set queue depth (nvme0n3) 00:23:35.780 Could not set queue depth (nvme0n4) 00:23:36.038 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:36.038 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:36.038 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:36.038 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:36.038 fio-3.35 00:23:36.038 Starting 4 threads 00:23:37.471 00:23:37.471 job0: (groupid=0, jobs=1): err= 0: pid=1420828: Mon Jun 10 13:51:51 2024 00:23:37.471 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:23:37.471 slat (usec): min=2, max=20391, avg=210.50, stdev=1363.70 00:23:37.471 clat (usec): min=3138, max=67616, avg=26651.37, stdev=15541.69 00:23:37.471 lat (usec): min=3148, max=67641, avg=26861.88, stdev=15614.24 00:23:37.471 clat percentiles (usec): 00:23:37.471 | 1.00th=[ 8848], 5.00th=[12387], 
10.00th=[12518], 20.00th=[14222], 00:23:37.471 | 30.00th=[15926], 40.00th=[17957], 50.00th=[20579], 60.00th=[23200], 00:23:37.471 | 70.00th=[31065], 80.00th=[41157], 90.00th=[54264], 95.00th=[56886], 00:23:37.471 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:23:37.471 | 99.99th=[67634] 00:23:37.471 write: IOPS=2573, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1002msec); 0 zone resets 00:23:37.471 slat (usec): min=4, max=11579, avg=163.77, stdev=829.07 00:23:37.471 clat (usec): min=1125, max=61740, avg=22692.68, stdev=15046.44 00:23:37.471 lat (usec): min=1138, max=61754, avg=22856.45, stdev=15127.35 00:23:37.471 clat percentiles (usec): 00:23:37.471 | 1.00th=[ 3097], 5.00th=[ 7439], 10.00th=[ 9241], 20.00th=[10945], 00:23:37.471 | 30.00th=[11731], 40.00th=[14615], 50.00th=[16450], 60.00th=[19006], 00:23:37.471 | 70.00th=[26084], 80.00th=[39584], 90.00th=[46924], 95.00th=[54789], 00:23:37.471 | 99.00th=[59507], 99.50th=[61080], 99.90th=[61604], 99.95th=[61604], 00:23:37.471 | 99.99th=[61604] 00:23:37.471 bw ( KiB/s): min= 8208, max=12272, per=16.86%, avg=10240.00, stdev=2873.68, samples=2 00:23:37.471 iops : min= 2052, max= 3068, avg=2560.00, stdev=718.42, samples=2 00:23:37.471 lat (msec) : 2=0.06%, 4=0.82%, 10=6.40%, 20=47.52%, 50=34.99% 00:23:37.471 lat (msec) : 100=10.22% 00:23:37.471 cpu : usr=2.60%, sys=5.69%, ctx=308, majf=0, minf=1 00:23:37.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:37.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:37.471 issued rwts: total=2560,2579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:37.471 job1: (groupid=0, jobs=1): err= 0: pid=1420829: Mon Jun 10 13:51:51 2024 00:23:37.471 read: IOPS=3820, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1004msec) 00:23:37.471 slat (usec): min=2, max=18857, avg=104.04, stdev=782.33 00:23:37.471 clat (usec): min=1315, max=45923, avg=15404.28, stdev=6100.97 00:23:37.471 lat (usec): min=4077, max=45964, avg=15508.32, stdev=6164.93 00:23:37.471 clat percentiles (usec): 00:23:37.471 | 1.00th=[ 6652], 5.00th=[ 8291], 10.00th=[ 9372], 20.00th=[11469], 00:23:37.471 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13698], 60.00th=[14615], 00:23:37.471 | 70.00th=[16450], 80.00th=[18482], 90.00th=[24511], 95.00th=[30540], 00:23:37.471 | 99.00th=[36963], 99.50th=[39060], 99.90th=[40633], 99.95th=[42730], 00:23:37.471 | 99.99th=[45876] 00:23:37.471 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:23:37.471 slat (usec): min=3, max=45109, avg=121.85, stdev=1265.76 00:23:37.471 clat (usec): min=1531, max=52126, avg=13883.33, stdev=6554.96 00:23:37.471 lat (usec): min=1543, max=89883, avg=14005.19, stdev=6709.24 00:23:37.471 clat percentiles (usec): 00:23:37.471 | 1.00th=[ 2966], 5.00th=[ 6128], 10.00th=[ 6915], 20.00th=[ 8979], 00:23:37.471 | 30.00th=[10421], 40.00th=[11994], 50.00th=[12911], 60.00th=[14484], 00:23:37.471 | 70.00th=[15401], 80.00th=[17171], 90.00th=[22152], 95.00th=[24773], 00:23:37.471 | 99.00th=[40633], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:23:37.471 | 99.99th=[52167] 00:23:37.471 bw ( KiB/s): min=15128, max=17640, per=26.98%, avg=16384.00, stdev=1776.25, samples=2 00:23:37.471 iops : min= 3782, max= 4410, avg=4096.00, stdev=444.06, samples=2 00:23:37.471 lat (msec) : 2=0.09%, 4=0.74%, 10=20.75%, 20=66.19%, 50=12.22% 00:23:37.471 lat (msec) : 100=0.01% 
00:23:37.471 cpu : usr=3.99%, sys=7.38%, ctx=277, majf=0, minf=1 00:23:37.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:37.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:37.472 issued rwts: total=3836,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:37.472 job2: (groupid=0, jobs=1): err= 0: pid=1420830: Mon Jun 10 13:51:51 2024 00:23:37.472 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:23:37.472 slat (usec): min=2, max=16843, avg=124.02, stdev=861.58 00:23:37.472 clat (usec): min=2703, max=77736, avg=16981.68, stdev=10612.88 00:23:37.472 lat (usec): min=3000, max=77744, avg=17105.69, stdev=10697.30 00:23:37.472 clat percentiles (usec): 00:23:37.472 | 1.00th=[ 6259], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[12125], 00:23:37.472 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13829], 60.00th=[14615], 00:23:37.472 | 70.00th=[15533], 80.00th=[18482], 90.00th=[25560], 95.00th=[35914], 00:23:37.472 | 99.00th=[64750], 99.50th=[67634], 99.90th=[70779], 99.95th=[70779], 00:23:37.472 | 99.99th=[78119] 00:23:37.472 write: IOPS=4573, BW=17.9MiB/s (18.7MB/s)(17.9MiB/1003msec); 0 zone resets 00:23:37.472 slat (usec): min=3, max=13276, avg=95.48, stdev=637.61 00:23:37.472 clat (usec): min=2295, max=38450, avg=12397.14, stdev=4949.72 00:23:37.472 lat (usec): min=2305, max=38461, avg=12492.62, stdev=4974.68 00:23:37.472 clat percentiles (usec): 00:23:37.472 | 1.00th=[ 5866], 5.00th=[ 7832], 10.00th=[ 8717], 20.00th=[ 9241], 00:23:37.472 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[11600], 60.00th=[12256], 00:23:37.472 | 70.00th=[12911], 80.00th=[14091], 90.00th=[16450], 95.00th=[25035], 00:23:37.472 | 99.00th=[31851], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:23:37.472 | 99.99th=[38536] 00:23:37.472 bw ( KiB/s): min=15200, max=20480, per=29.38%, avg=17840.00, stdev=3733.52, samples=2 00:23:37.472 iops : min= 3800, max= 5120, avg=4460.00, stdev=933.38, samples=2 00:23:37.472 lat (msec) : 4=0.51%, 10=22.10%, 20=66.27%, 50=9.56%, 100=1.57% 00:23:37.472 cpu : usr=4.19%, sys=6.49%, ctx=314, majf=0, minf=1 00:23:37.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:37.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:37.472 issued rwts: total=4096,4587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:37.472 job3: (groupid=0, jobs=1): err= 0: pid=1420831: Mon Jun 10 13:51:51 2024 00:23:37.472 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:23:37.472 slat (usec): min=2, max=24731, avg=142.69, stdev=1075.20 00:23:37.472 clat (msec): min=4, max=104, avg=18.79, stdev=13.59 00:23:37.472 lat (msec): min=4, max=104, avg=18.93, stdev=13.66 00:23:37.472 clat percentiles (msec): 00:23:37.472 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:23:37.472 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 17], 00:23:37.472 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 28], 95.00th=[ 46], 00:23:37.472 | 99.00th=[ 88], 99.50th=[ 89], 99.90th=[ 105], 99.95th=[ 105], 00:23:37.472 | 99.99th=[ 105] 00:23:37.472 write: IOPS=3972, BW=15.5MiB/s (16.3MB/s)(15.5MiB/1002msec); 0 zone resets 00:23:37.472 slat (usec): min=3, max=14463, avg=114.37, 
stdev=722.38 00:23:37.472 clat (usec): min=1286, max=42236, avg=15014.34, stdev=5343.39 00:23:37.472 lat (usec): min=1300, max=44561, avg=15128.71, stdev=5399.49 00:23:37.472 clat percentiles (usec): 00:23:37.472 | 1.00th=[ 2999], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[11731], 00:23:37.472 | 30.00th=[12780], 40.00th=[13173], 50.00th=[14615], 60.00th=[15008], 00:23:37.472 | 70.00th=[15664], 80.00th=[17171], 90.00th=[19006], 95.00th=[27657], 00:23:37.472 | 99.00th=[34866], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:23:37.472 | 99.99th=[42206] 00:23:37.472 bw ( KiB/s): min=14440, max=16384, per=25.38%, avg=15412.00, stdev=1374.62, samples=2 00:23:37.472 iops : min= 3610, max= 4096, avg=3853.00, stdev=343.65, samples=2 00:23:37.472 lat (msec) : 2=0.15%, 4=0.49%, 10=7.05%, 20=77.37%, 50=12.85% 00:23:37.472 lat (msec) : 100=2.05%, 250=0.05% 00:23:37.472 cpu : usr=3.00%, sys=5.19%, ctx=373, majf=0, minf=1 00:23:37.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:37.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:37.472 issued rwts: total=3584,3980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:37.472 00:23:37.472 Run status group 0 (all jobs): 00:23:37.472 READ: bw=54.8MiB/s (57.4MB/s), 9.98MiB/s-16.0MiB/s (10.5MB/s-16.7MB/s), io=55.0MiB (57.7MB), run=1002-1004msec 00:23:37.472 WRITE: bw=59.3MiB/s (62.2MB/s), 10.1MiB/s-17.9MiB/s (10.5MB/s-18.7MB/s), io=59.5MiB (62.4MB), run=1002-1004msec 00:23:37.472 00:23:37.472 Disk stats (read/write): 00:23:37.472 nvme0n1: ios=2066/2339, merge=0/0, ticks=35704/50866, in_queue=86570, util=99.70% 00:23:37.472 nvme0n2: ios=3092/3203, merge=0/0, ticks=33809/29493, in_queue=63302, util=92.64% 00:23:37.472 nvme0n3: ios=3558/3584, merge=0/0, ticks=29715/18726, in_queue=48441, util=96.39% 00:23:37.472 nvme0n4: ios=3129/3205, merge=0/0, ticks=28995/26155, in_queue=55150, util=93.33% 00:23:37.472 13:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:23:37.472 [global] 00:23:37.472 thread=1 00:23:37.472 invalidate=1 00:23:37.472 rw=randwrite 00:23:37.472 time_based=1 00:23:37.472 runtime=1 00:23:37.472 ioengine=libaio 00:23:37.472 direct=1 00:23:37.472 bs=4096 00:23:37.472 iodepth=128 00:23:37.472 norandommap=0 00:23:37.472 numjobs=1 00:23:37.472 00:23:37.472 verify_dump=1 00:23:37.472 verify_backlog=512 00:23:37.472 verify_state_save=0 00:23:37.472 do_verify=1 00:23:37.472 verify=crc32c-intel 00:23:37.472 [job0] 00:23:37.472 filename=/dev/nvme0n1 00:23:37.472 [job1] 00:23:37.472 filename=/dev/nvme0n2 00:23:37.472 [job2] 00:23:37.472 filename=/dev/nvme0n3 00:23:37.472 [job3] 00:23:37.472 filename=/dev/nvme0n4 00:23:37.472 Could not set queue depth (nvme0n1) 00:23:37.472 Could not set queue depth (nvme0n2) 00:23:37.472 Could not set queue depth (nvme0n3) 00:23:37.472 Could not set queue depth (nvme0n4) 00:23:37.730 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:37.730 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:37.730 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:37.730 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:37.730 fio-3.35 00:23:37.730 Starting 4 threads 00:23:39.104 00:23:39.104 job0: (groupid=0, jobs=1): err= 0: pid=1421261: Mon Jun 10 13:51:53 2024 00:23:39.104 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:23:39.104 slat (usec): min=3, max=12623, avg=107.43, stdev=677.03 00:23:39.104 clat (usec): min=5997, max=38993, avg=14858.81, stdev=4148.92 00:23:39.104 lat (usec): min=6008, max=39001, avg=14966.24, stdev=4180.71 00:23:39.104 clat percentiles (usec): 00:23:39.104 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[11207], 20.00th=[12256], 00:23:39.104 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13960], 60.00th=[14484], 00:23:39.104 | 70.00th=[14746], 80.00th=[17171], 90.00th=[19792], 95.00th=[22414], 00:23:39.104 | 99.00th=[32113], 99.50th=[35390], 99.90th=[39060], 99.95th=[39060], 00:23:39.104 | 99.99th=[39060] 00:23:39.104 write: IOPS=4248, BW=16.6MiB/s (17.4MB/s)(16.8MiB/1011msec); 0 zone resets 00:23:39.104 slat (usec): min=3, max=24598, avg=104.78, stdev=700.94 00:23:39.104 clat (usec): min=1333, max=53883, avg=15623.21, stdev=7989.14 00:23:39.104 lat (usec): min=1344, max=53894, avg=15728.00, stdev=8020.05 00:23:39.104 clat percentiles (usec): 00:23:39.104 | 1.00th=[ 5014], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[10945], 00:23:39.104 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13042], 60.00th=[13435], 00:23:39.104 | 70.00th=[14222], 80.00th=[16909], 90.00th=[27919], 95.00th=[35390], 00:23:39.104 | 99.00th=[40109], 99.50th=[41157], 99.90th=[53216], 99.95th=[53216], 00:23:39.104 | 99.99th=[53740] 00:23:39.104 bw ( KiB/s): min=12856, max=20439, per=27.77%, avg=16647.50, stdev=5361.99, samples=2 00:23:39.104 iops : min= 3214, max= 5109, avg=4161.50, stdev=1339.97, samples=2 00:23:39.104 lat (msec) : 2=0.04%, 4=0.10%, 10=9.55%, 20=77.39%, 50=12.84% 00:23:39.104 lat (msec) : 100=0.10% 00:23:39.104 cpu : usr=5.45%, sys=6.93%, ctx=343, majf=0, minf=1 00:23:39.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:39.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:39.104 issued rwts: total=4096,4295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.104 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:39.104 job1: (groupid=0, jobs=1): err= 0: pid=1421262: Mon Jun 10 13:51:53 2024 00:23:39.104 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:23:39.104 slat (usec): min=2, max=16814, avg=112.82, stdev=667.50 00:23:39.104 clat (usec): min=8634, max=50463, avg=15066.85, stdev=7109.42 00:23:39.104 lat (usec): min=9608, max=50472, avg=15179.66, stdev=7132.89 00:23:39.104 clat percentiles (usec): 00:23:39.104 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11469], 20.00th=[12125], 00:23:39.104 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13304], 60.00th=[13698], 00:23:39.104 | 70.00th=[14222], 80.00th=[15008], 90.00th=[15926], 95.00th=[30802], 00:23:39.104 | 99.00th=[49021], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:23:39.104 | 99.99th=[50594] 00:23:39.104 write: IOPS=4723, BW=18.5MiB/s (19.3MB/s)(18.5MiB/1001msec); 0 zone resets 00:23:39.104 slat (usec): min=3, max=4660, avg=91.83, stdev=449.85 00:23:39.104 clat (usec): min=396, max=18326, avg=12082.95, stdev=1683.29 00:23:39.104 lat (usec): min=3884, max=18356, avg=12174.78, stdev=1658.45 00:23:39.104 clat percentiles (usec): 00:23:39.104 | 1.00th=[ 7242], 5.00th=[ 9503], 
10.00th=[10159], 20.00th=[10814], 00:23:39.104 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12387], 60.00th=[13042], 00:23:39.104 | 70.00th=[13173], 80.00th=[13566], 90.00th=[13829], 95.00th=[14222], 00:23:39.104 | 99.00th=[14877], 99.50th=[15533], 99.90th=[16712], 99.95th=[17695], 00:23:39.104 | 99.99th=[18220] 00:23:39.104 bw ( KiB/s): min=20439, max=20439, per=34.09%, avg=20439.00, stdev= 0.00, samples=1 00:23:39.104 iops : min= 5109, max= 5109, avg=5109.00, stdev= 0.00, samples=1 00:23:39.104 lat (usec) : 500=0.01% 00:23:39.104 lat (msec) : 4=0.13%, 10=5.22%, 20=91.14%, 50=3.16%, 100=0.34% 00:23:39.104 cpu : usr=3.90%, sys=6.50%, ctx=418, majf=0, minf=1 00:23:39.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:39.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:39.104 issued rwts: total=4608,4728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.104 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:39.104 job2: (groupid=0, jobs=1): err= 0: pid=1421264: Mon Jun 10 13:51:53 2024 00:23:39.104 read: IOPS=2895, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1012msec) 00:23:39.104 slat (usec): min=2, max=54985, avg=182.61, stdev=1580.38 00:23:39.104 clat (usec): min=4777, max=95081, avg=23679.13, stdev=15389.85 00:23:39.104 lat (usec): min=5109, max=95095, avg=23861.74, stdev=15495.10 00:23:39.104 clat percentiles (usec): 00:23:39.104 | 1.00th=[ 6849], 5.00th=[10683], 10.00th=[11994], 20.00th=[13960], 00:23:39.104 | 30.00th=[15533], 40.00th=[16581], 50.00th=[18482], 60.00th=[20579], 00:23:39.104 | 70.00th=[21890], 80.00th=[32900], 90.00th=[40109], 95.00th=[52691], 00:23:39.104 | 99.00th=[79168], 99.50th=[80217], 99.90th=[88605], 99.95th=[88605], 00:23:39.104 | 99.99th=[94897] 00:23:39.104 write: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec); 0 zone resets 00:23:39.104 slat (usec): min=3, max=19721, avg=112.58, stdev=713.75 00:23:39.104 clat (usec): min=1362, max=84417, avg=19241.84, stdev=12363.16 00:23:39.104 lat (usec): min=1375, max=84425, avg=19354.42, stdev=12414.33 00:23:39.104 clat percentiles (usec): 00:23:39.104 | 1.00th=[ 4752], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[11600], 00:23:39.104 | 30.00th=[14484], 40.00th=[15795], 50.00th=[16712], 60.00th=[17171], 00:23:39.104 | 70.00th=[18482], 80.00th=[25560], 90.00th=[27919], 95.00th=[39060], 00:23:39.104 | 99.00th=[79168], 99.50th=[81265], 99.90th=[84411], 99.95th=[84411], 00:23:39.104 | 99.99th=[84411] 00:23:39.104 bw ( KiB/s): min=12288, max=12288, per=20.50%, avg=12288.00, stdev= 0.00, samples=2 00:23:39.104 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:23:39.104 lat (msec) : 2=0.03%, 4=0.05%, 10=7.78%, 20=59.13%, 50=27.99% 00:23:39.104 lat (msec) : 100=5.01% 00:23:39.104 cpu : usr=3.26%, sys=4.95%, ctx=266, majf=0, minf=1 00:23:39.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:39.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:39.104 issued rwts: total=2930,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.104 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:39.104 job3: (groupid=0, jobs=1): err= 0: pid=1421265: Mon Jun 10 13:51:53 2024 00:23:39.104 read: IOPS=2659, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1010msec) 00:23:39.104 slat (usec): min=3, max=26443, avg=143.21, stdev=1119.60 
00:23:39.104 clat (usec): min=1902, max=44000, avg=17672.32, stdev=5105.24 00:23:39.104 lat (usec): min=6767, max=44029, avg=17815.53, stdev=5154.45 00:23:39.104 clat percentiles (usec): 00:23:39.104 | 1.00th=[ 7767], 5.00th=[12125], 10.00th=[13435], 20.00th=[13829], 00:23:39.104 | 30.00th=[14484], 40.00th=[15139], 50.00th=[15795], 60.00th=[17433], 00:23:39.104 | 70.00th=[19792], 80.00th=[21365], 90.00th=[24773], 95.00th=[28705], 00:23:39.104 | 99.00th=[33162], 99.50th=[33162], 99.90th=[34866], 99.95th=[35390], 00:23:39.104 | 99.99th=[43779] 00:23:39.104 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec); 0 zone resets 00:23:39.104 slat (usec): min=4, max=26585, avg=192.54, stdev=1155.34 00:23:39.104 clat (usec): min=3414, max=90132, avg=26091.95, stdev=19408.20 00:23:39.104 lat (usec): min=3427, max=90147, avg=26284.49, stdev=19535.33 00:23:39.104 clat percentiles (usec): 00:23:39.104 | 1.00th=[ 5342], 5.00th=[ 7701], 10.00th=[10683], 20.00th=[14353], 00:23:39.104 | 30.00th=[14615], 40.00th=[14877], 50.00th=[16712], 60.00th=[20317], 00:23:39.104 | 70.00th=[28705], 80.00th=[38011], 90.00th=[62129], 95.00th=[74974], 00:23:39.104 | 99.00th=[83362], 99.50th=[84411], 99.90th=[89654], 99.95th=[89654], 00:23:39.104 | 99.99th=[89654] 00:23:39.104 bw ( KiB/s): min= 8192, max=16368, per=20.48%, avg=12280.00, stdev=5781.31, samples=2 00:23:39.104 iops : min= 2048, max= 4092, avg=3070.00, stdev=1445.33, samples=2 00:23:39.104 lat (msec) : 2=0.02%, 4=0.10%, 10=5.33%, 20=59.62%, 50=28.12% 00:23:39.104 lat (msec) : 100=6.81% 00:23:39.104 cpu : usr=3.77%, sys=4.76%, ctx=387, majf=0, minf=1 00:23:39.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:39.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:39.104 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.104 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:39.104 00:23:39.104 Run status group 0 (all jobs): 00:23:39.104 READ: bw=55.3MiB/s (58.0MB/s), 10.4MiB/s-18.0MiB/s (10.9MB/s-18.9MB/s), io=55.9MiB (58.7MB), run=1001-1012msec 00:23:39.104 WRITE: bw=58.5MiB/s (61.4MB/s), 11.9MiB/s-18.5MiB/s (12.4MB/s-19.3MB/s), io=59.2MiB (62.1MB), run=1001-1012msec 00:23:39.104 00:23:39.104 Disk stats (read/write): 00:23:39.104 nvme0n1: ios=3312/3584, merge=0/0, ticks=37686/42554, in_queue=80240, util=99.00% 00:23:39.104 nvme0n2: ios=3828/4096, merge=0/0, ticks=15135/13399, in_queue=28534, util=84.62% 00:23:39.104 nvme0n3: ios=2048/2213, merge=0/0, ticks=36738/30862, in_queue=67600, util=87.93% 00:23:39.104 nvme0n4: ios=2082/2399, merge=0/0, ticks=35594/65843, in_queue=101437, util=99.78% 00:23:39.105 13:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:23:39.105 13:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1421525 00:23:39.105 13:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:23:39.105 13:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:23:39.105 [global] 00:23:39.105 thread=1 00:23:39.105 invalidate=1 00:23:39.105 rw=read 00:23:39.105 time_based=1 00:23:39.105 runtime=10 00:23:39.105 ioengine=libaio 00:23:39.105 direct=1 00:23:39.105 bs=4096 00:23:39.105 iodepth=1 00:23:39.105 norandommap=1 00:23:39.105 numjobs=1 00:23:39.105 00:23:39.105 [job0] 00:23:39.105 filename=/dev/nvme0n1 00:23:39.105 [job1] 
00:23:39.105 filename=/dev/nvme0n2 00:23:39.105 [job2] 00:23:39.105 filename=/dev/nvme0n3 00:23:39.105 [job3] 00:23:39.105 filename=/dev/nvme0n4 00:23:39.105 Could not set queue depth (nvme0n1) 00:23:39.105 Could not set queue depth (nvme0n2) 00:23:39.105 Could not set queue depth (nvme0n3) 00:23:39.105 Could not set queue depth (nvme0n4) 00:23:39.670 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:39.670 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:39.670 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:39.670 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:39.670 fio-3.35 00:23:39.670 Starting 4 threads 00:23:42.201 13:51:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:23:42.201 13:51:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:23:42.458 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1085440, buflen=4096 00:23:42.458 fio: pid=1421684, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:42.458 13:51:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:42.458 13:51:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:23:42.716 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=2129920, buflen=4096 00:23:42.716 fio: pid=1421683, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:42.716 13:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:42.716 13:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:23:42.716 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4026368, buflen=4096 00:23:42.716 fio: pid=1421681, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:42.975 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=26263552, buflen=4096 00:23:42.975 fio: pid=1421682, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:42.975 13:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:42.975 13:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:23:42.975 00:23:42.975 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1421681: Mon Jun 10 13:51:57 2024 00:23:42.975 read: IOPS=313, BW=1253KiB/s (1283kB/s)(3932KiB/3139msec) 00:23:42.975 slat (usec): min=9, max=578, avg=11.93, stdev=18.64 00:23:42.975 clat (usec): min=369, max=42396, avg=3158.22, stdev=10199.99 00:23:42.975 lat (usec): min=379, max=42912, avg=3170.14, stdev=10205.86 00:23:42.975 clat percentiles (usec): 00:23:42.975 | 1.00th=[ 383], 5.00th=[ 392], 10.00th=[ 396], 20.00th=[ 404], 00:23:42.975 | 30.00th=[ 408], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 424], 00:23:42.975 
| 70.00th=[ 433], 80.00th=[ 441], 90.00th=[ 510], 95.00th=[41157], 00:23:42.975 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:42.975 | 99.99th=[42206] 00:23:42.975 bw ( KiB/s): min= 90, max= 6136, per=13.38%, avg=1305.67, stdev=2389.21, samples=6 00:23:42.975 iops : min= 22, max= 1534, avg=326.33, stdev=597.35, samples=6 00:23:42.975 lat (usec) : 500=88.31%, 750=4.78%, 1000=0.10% 00:23:42.975 lat (msec) : 50=6.71% 00:23:42.975 cpu : usr=0.35%, sys=0.45%, ctx=987, majf=0, minf=1 00:23:42.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:42.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.975 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.975 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.975 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:42.975 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1421682: Mon Jun 10 13:51:57 2024 00:23:42.975 read: IOPS=1912, BW=7647KiB/s (7831kB/s)(25.0MiB/3354msec) 00:23:42.975 slat (usec): min=8, max=19248, avg=17.65, stdev=345.10 00:23:42.975 clat (usec): min=319, max=40817, avg=500.77, stdev=681.13 00:23:42.975 lat (usec): min=328, max=40830, avg=518.41, stdev=766.01 00:23:42.975 clat percentiles (usec): 00:23:42.975 | 1.00th=[ 371], 5.00th=[ 429], 10.00th=[ 461], 20.00th=[ 474], 00:23:42.975 | 30.00th=[ 478], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 494], 00:23:42.975 | 70.00th=[ 498], 80.00th=[ 506], 90.00th=[ 519], 95.00th=[ 529], 00:23:42.975 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 742], 99.95th=[10421], 00:23:42.975 | 99.99th=[40633] 00:23:42.975 bw ( KiB/s): min= 6960, max= 8368, per=79.29%, avg=7735.83, stdev=519.47, samples=6 00:23:42.975 iops : min= 1740, max= 2092, avg=1933.83, stdev=130.00, samples=6 00:23:42.975 lat (usec) : 500=73.85%, 750=26.04%, 1000=0.03% 00:23:42.975 lat (msec) : 20=0.03%, 50=0.03% 00:23:42.975 cpu : usr=0.92%, sys=2.27%, ctx=6418, majf=0, minf=1 00:23:42.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:42.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.975 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.975 issued rwts: total=6413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.975 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:42.975 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1421683: Mon Jun 10 13:51:57 2024 00:23:42.975 read: IOPS=178, BW=714KiB/s (731kB/s)(2080KiB/2912msec) 00:23:42.975 slat (usec): min=9, max=2788, avg=17.50, stdev=121.73 00:23:42.975 clat (usec): min=345, max=45862, avg=5540.16, stdev=13500.45 00:23:42.975 lat (usec): min=370, max=45893, avg=5557.65, stdev=13519.47 00:23:42.975 clat percentiles (usec): 00:23:42.975 | 1.00th=[ 367], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 392], 00:23:42.975 | 30.00th=[ 416], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 465], 00:23:42.975 | 70.00th=[ 494], 80.00th=[ 510], 90.00th=[41157], 95.00th=[41157], 00:23:42.975 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:23:42.975 | 99.99th=[45876] 00:23:42.975 bw ( KiB/s): min= 96, max= 2240, per=8.36%, avg=816.00, stdev=1015.46, samples=5 00:23:42.975 iops : min= 24, max= 560, avg=204.00, stdev=253.87, samples=5 00:23:42.975 lat (usec) : 500=76.58%, 750=10.56%, 1000=0.19% 00:23:42.975 lat (msec) : 
50=12.48% 00:23:42.975 cpu : usr=0.34%, sys=0.17%, ctx=522, majf=0, minf=1 00:23:42.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:42.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.975 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.975 issued rwts: total=521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.975 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:42.975 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1421684: Mon Jun 10 13:51:57 2024 00:23:42.976 read: IOPS=99, BW=397KiB/s (407kB/s)(1060KiB/2667msec) 00:23:42.976 slat (nsec): min=9696, max=34565, avg=13932.99, stdev=6406.16 00:23:42.976 clat (usec): min=374, max=42288, avg=9943.06, stdev=17146.36 00:23:42.976 lat (usec): min=384, max=42322, avg=9956.95, stdev=17152.49 00:23:42.976 clat percentiles (usec): 00:23:42.976 | 1.00th=[ 429], 5.00th=[ 478], 10.00th=[ 490], 20.00th=[ 494], 00:23:42.976 | 30.00th=[ 494], 40.00th=[ 498], 50.00th=[ 498], 60.00th=[ 502], 00:23:42.976 | 70.00th=[ 519], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:23:42.976 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:42.976 | 99.99th=[42206] 00:23:42.976 bw ( KiB/s): min= 96, max= 1680, per=4.27%, avg=417.60, stdev=705.78, samples=5 00:23:42.976 iops : min= 24, max= 420, avg=104.40, stdev=176.44, samples=5 00:23:42.976 lat (usec) : 500=54.14%, 750=21.80%, 1000=0.38% 00:23:42.976 lat (msec) : 50=23.31% 00:23:42.976 cpu : usr=0.08%, sys=0.23%, ctx=266, majf=0, minf=2 00:23:42.976 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:42.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.976 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.976 issued rwts: total=266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.976 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:42.976 00:23:42.976 Run status group 0 (all jobs): 00:23:42.976 READ: bw=9756KiB/s (9990kB/s), 397KiB/s-7647KiB/s (407kB/s-7831kB/s), io=32.0MiB (33.5MB), run=2667-3354msec 00:23:42.976 00:23:42.976 Disk stats (read/write): 00:23:42.976 nvme0n1: ios=981/0, merge=0/0, ticks=3017/0, in_queue=3017, util=94.88% 00:23:42.976 nvme0n2: ios=6411/0, merge=0/0, ticks=3132/0, in_queue=3132, util=94.35% 00:23:42.976 nvme0n3: ios=518/0, merge=0/0, ticks=2797/0, in_queue=2797, util=96.25% 00:23:42.976 nvme0n4: ios=263/0, merge=0/0, ticks=2553/0, in_queue=2553, util=96.44% 00:23:43.234 13:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:43.234 13:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:23:43.493 13:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:43.493 13:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:23:43.752 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:43.752 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:23:44.011 13:51:58 
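The rpc.py calls traced on either side of this fio summary (target/fio.sh@63 through @66) delete the backing bdevs while the 10-second read job is still running: first the raid-style bdevs (concat0, raid0), then each Malloc bdev in turn. That is what produces the "fio: io_u error ... Remote I/O error" lines, and it is intentional; the script later reports "nvmf hotplug test: fio failed as expected". A minimal sketch of that teardown, using only commands that appear in the trace (the loop variables are the harness's own, and the relative rpc.py path assumes the SPDK repo root):

    # delete the raid-style bdevs, then every Malloc bdev, while fio is still reading
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        scripts/rpc.py bdev_malloc_delete "$malloc_bdev"   # Malloc0 .. Malloc6 in this run
    done
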
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:44.011 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1421525 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:44.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:23:44.270 nvmf hotplug test: fio failed as expected 00:23:44.270 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.529 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:23:44.529 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:23:44.529 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:23:44.529 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:23:44.529 13:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:23:44.529 13:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:44.529 13:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:23:44.529 13:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:44.529 13:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:23:44.529 13:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:44.530 13:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:44.530 rmmod nvme_tcp 00:23:44.530 rmmod nvme_fabrics 00:23:44.530 rmmod nvme_keyring 00:23:44.530 13:51:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1418177 ']' 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1418177 00:23:44.789 13:51:59 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 1418177 ']' 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 1418177 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1418177 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1418177' 00:23:44.789 killing process with pid 1418177 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 1418177 00:23:44.789 13:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 1418177 00:23:45.048 13:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.048 13:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.048 13:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.048 13:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.048 13:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.048 13:51:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.048 13:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.048 13:51:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.949 13:52:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:46.949 00:23:46.949 real 0m32.051s 00:23:46.949 user 2m23.739s 00:23:46.949 sys 0m11.883s 00:23:46.949 13:52:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:46.949 13:52:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.949 ************************************ 00:23:46.949 END TEST nvmf_fio_target 00:23:46.949 ************************************ 00:23:46.949 13:52:01 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:46.949 13:52:01 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:46.949 13:52:01 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:46.949 13:52:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:47.207 ************************************ 00:23:47.207 START TEST nvmf_bdevio 00:23:47.207 ************************************ 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:47.208 * Looking for test storage... 
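At this point the fio-target test is finished and run_test hands control to the next script: test/nvmf/target/bdevio.sh with --transport=tcp, which brings up a fresh TCP target and drives SPDK's bdevio CUnit suite against an exported namespace. A sketch of launching the same phase by hand, using the path shown in the log (it assumes a built SPDK tree and the same NIC labels as this CI node):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/target/bdevio.sh --transport=tcp
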
00:23:47.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:23:47.208 13:52:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:57.192 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.192 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:23:57.192 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:57.192 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:57.192 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:57.192 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:57.193 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:57.193 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:57.193 Found net devices under 0000:af:00.0: cvl_0_0 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:57.193 
Found net devices under 0000:af:00.1: cvl_0_1 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:57.193 13:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:23:57.193 00:23:57.193 --- 10.0.0.2 ping statistics --- 00:23:57.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.193 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:23:57.193 00:23:57.193 --- 10.0.0.1 ping statistics --- 00:23:57.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.193 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1426947 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1426947 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 1426947 ']' 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:57.193 13:52:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:57.193 [2024-06-10 13:52:10.237888] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:23:57.193 [2024-06-10 13:52:10.237935] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.193 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.194 [2024-06-10 13:52:10.348003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.194 [2024-06-10 13:52:10.433859] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.194 [2024-06-10 13:52:10.433899] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
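The trace above shows how the target side is launched: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace that was just set up, so the host-side initiator reaches it over the cvl_0_1 to cvl_0_0 link (10.0.0.1 to 10.0.0.2) verified by the two pings. A condensed sketch of that launch, with the flags taken from the trace (the relative binary path assumes the SPDK repo root, and the backgrounding plus pid variable are illustrative; the harness's waitforlisten then polls the /var/tmp/spdk.sock RPC socket before any rpc.py call is issued):

    # start the NVMe-oF target inside the target network namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!    # 1426947 in this run
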
00:23:57.194 [2024-06-10 13:52:10.433912] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.194 [2024-06-10 13:52:10.433924] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.194 [2024-06-10 13:52:10.433934] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.194 [2024-06-10 13:52:10.434060] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:23:57.194 [2024-06-10 13:52:10.434169] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:23:57.194 [2024-06-10 13:52:10.434279] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.194 [2024-06-10 13:52:10.434279] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:57.194 [2024-06-10 13:52:11.206834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:57.194 Malloc0 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:23:57.194 [2024-06-10 13:52:11.254661] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.194 { 00:23:57.194 "params": { 00:23:57.194 "name": "Nvme$subsystem", 00:23:57.194 "trtype": "$TEST_TRANSPORT", 00:23:57.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.194 "adrfam": "ipv4", 00:23:57.194 "trsvcid": "$NVMF_PORT", 00:23:57.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.194 "hdgst": ${hdgst:-false}, 00:23:57.194 "ddgst": ${ddgst:-false} 00:23:57.194 }, 00:23:57.194 "method": "bdev_nvme_attach_controller" 00:23:57.194 } 00:23:57.194 EOF 00:23:57.194 )") 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:23:57.194 13:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:57.194 "params": { 00:23:57.194 "name": "Nvme1", 00:23:57.194 "trtype": "tcp", 00:23:57.194 "traddr": "10.0.0.2", 00:23:57.194 "adrfam": "ipv4", 00:23:57.194 "trsvcid": "4420", 00:23:57.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.194 "hdgst": false, 00:23:57.194 "ddgst": false 00:23:57.194 }, 00:23:57.194 "method": "bdev_nvme_attach_controller" 00:23:57.194 }' 00:23:57.194 [2024-06-10 13:52:11.306045] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
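For readers following the xtrace above, the target-side setup that target/bdevio.sh drives through rpc_cmd reduces to the following scripts/rpc.py sequence. This is a condensed sketch of what the log already shows, not a verbatim extract: rpc_cmd is assumed to resolve to rpc.py against the default /var/tmp/spdk.sock of the nvmf_tgt running in the cvl_0_0_ns_spdk namespace, and the JSON fed to bdevio over /dev/fd/62 is the bdev_nvme_attach_controller document printed just above (here stood in for by a file).

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # flags copied from NVMF_TRANSPORT_OPTS above
  $rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # shows up in bdevio as Nvme1n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevio then attaches through the generated JSON, e.g. saved to a file
  # instead of the /dev/fd/62 process substitution used by the harness:
  test/bdev/bdevio/bdevio --json ./nvme1_attach.json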
00:23:57.194 [2024-06-10 13:52:11.306105] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427230 ] 00:23:57.194 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.194 [2024-06-10 13:52:11.427981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:57.194 [2024-06-10 13:52:11.515882] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.194 [2024-06-10 13:52:11.515974] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.194 [2024-06-10 13:52:11.515978] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.451 I/O targets: 00:23:57.451 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:57.452 00:23:57.452 00:23:57.452 CUnit - A unit testing framework for C - Version 2.1-3 00:23:57.452 http://cunit.sourceforge.net/ 00:23:57.452 00:23:57.452 00:23:57.452 Suite: bdevio tests on: Nvme1n1 00:23:57.452 Test: blockdev write read block ...passed 00:23:57.452 Test: blockdev write zeroes read block ...passed 00:23:57.452 Test: blockdev write zeroes read no split ...passed 00:23:57.452 Test: blockdev write zeroes read split ...passed 00:23:57.709 Test: blockdev write zeroes read split partial ...passed 00:23:57.709 Test: blockdev reset ...[2024-06-10 13:52:11.936482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.709 [2024-06-10 13:52:11.936555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f35b0 (9): Bad file descriptor 00:23:57.709 [2024-06-10 13:52:11.997143] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:57.709 passed 00:23:57.709 Test: blockdev write read 8 blocks ...passed 00:23:57.709 Test: blockdev write read size > 128k ...passed 00:23:57.709 Test: blockdev write read invalid size ...passed 00:23:57.709 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:57.709 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:57.709 Test: blockdev write read max offset ...passed 00:23:57.709 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:57.709 Test: blockdev writev readv 8 blocks ...passed 00:23:57.709 Test: blockdev writev readv 30 x 1block ...passed 00:23:57.709 Test: blockdev writev readv block ...passed 00:23:57.709 Test: blockdev writev readv size > 128k ...passed 00:23:57.709 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:57.709 Test: blockdev comparev and writev ...[2024-06-10 13:52:12.175487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.709 [2024-06-10 13:52:12.175518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.709 [2024-06-10 13:52:12.175536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.709 [2024-06-10 13:52:12.175547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.709 [2024-06-10 13:52:12.175914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.709 [2024-06-10 13:52:12.175928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.709 [2024-06-10 13:52:12.175944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.709 [2024-06-10 13:52:12.175954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.709 [2024-06-10 13:52:12.176303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.709 [2024-06-10 13:52:12.176317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.709 [2024-06-10 13:52:12.176331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.709 [2024-06-10 13:52:12.176341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.709 [2024-06-10 13:52:12.176713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.709 [2024-06-10 13:52:12.176727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.709 [2024-06-10 13:52:12.176741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.709 [2024-06-10 13:52:12.176755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.966 passed 00:23:57.966 Test: blockdev nvme passthru rw ...passed 00:23:57.966 Test: blockdev nvme passthru vendor specific ...[2024-06-10 13:52:12.259128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:57.966 [2024-06-10 13:52:12.259153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.966 [2024-06-10 13:52:12.259359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:57.966 [2024-06-10 13:52:12.259371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.966 [2024-06-10 13:52:12.259584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:57.966 [2024-06-10 13:52:12.259597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.966 [2024-06-10 13:52:12.259802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:57.966 [2024-06-10 13:52:12.259815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.966 passed 00:23:57.966 Test: blockdev nvme admin passthru ...passed 00:23:57.966 Test: blockdev copy ...passed 00:23:57.966 00:23:57.966 Run Summary: Type Total Ran Passed Failed Inactive 00:23:57.966 suites 1 1 n/a 0 0 00:23:57.966 tests 23 23 23 0 0 00:23:57.966 asserts 152 152 152 0 n/a 00:23:57.966 00:23:57.966 Elapsed time = 1.201 seconds 00:23:58.222 13:52:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:58.223 rmmod nvme_tcp 00:23:58.223 rmmod nvme_fabrics 00:23:58.223 rmmod nvme_keyring 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1426947 ']' 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1426947 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
1426947 ']' 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 1426947 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1426947 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1426947' 00:23:58.223 killing process with pid 1426947 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 1426947 00:23:58.223 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 1426947 00:23:58.481 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:58.481 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:58.481 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:58.481 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.481 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:58.481 13:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.481 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.481 13:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.013 13:52:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:01.013 00:24:01.013 real 0m13.485s 00:24:01.013 user 0m14.135s 00:24:01.013 sys 0m7.267s 00:24:01.013 13:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:01.013 13:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:01.013 ************************************ 00:24:01.013 END TEST nvmf_bdevio 00:24:01.013 ************************************ 00:24:01.013 13:52:14 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:24:01.013 13:52:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:01.013 13:52:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:01.013 13:52:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:01.013 ************************************ 00:24:01.013 START TEST nvmf_auth_target 00:24:01.013 ************************************ 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:24:01.013 * Looking for test storage... 
00:24:01.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.013 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.014 13:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.014 13:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.014 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.014 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.014 13:52:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.014 13:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.989 13:52:23 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:10.989 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:10.989 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:24:10.989 Found net devices under 0000:af:00.0: cvl_0_0 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.989 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:10.990 Found net devices under 0000:af:00.1: cvl_0_1 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:24:10.990 00:24:10.990 --- 10.0.0.2 ping statistics --- 00:24:10.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.990 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:24:10.990 00:24:10.990 --- 10.0.0.1 ping statistics --- 00:24:10.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.990 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.990 13:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1431934 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1431934 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1431934 ']' 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
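Condensed, the nvmftestinit/nvmf_tcp_init block above carves the two E810 ports found under 0000:af:00.0/.1 into a point-to-point test link: cvl_0_0 moves into a fresh network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). The same commands as in the xtrace, grouped for readability; interface names and addresses are whatever this particular rig reports and will differ elsewhere:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                               # sanity check, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1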
00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:10.990 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1431986 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file key 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=null 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=48 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=18bded1e7113e76682985ef27a01b7b61e4a5abd9cd0ba6c 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-null.XXX 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-null.1AH 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key 18bded1e7113e76682985ef27a01b7b61e4a5abd9cd0ba6c 0 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 18bded1e7113e76682985ef27a01b7b61e4a5abd9cd0ba6c 0 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # key=18bded1e7113e76682985ef27a01b7b61e4a5abd9cd0ba6c 00:24:10.990 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=0 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-null.1AH 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-null.1AH 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.1AH 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file 
key 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=sha512 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=64 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=f40423d7e66e260c650cd6910863df1426b3474a24692d0c3ab02012bdf95263 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha512.XXX 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha512.OiU 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key f40423d7e66e260c650cd6910863df1426b3474a24692d0c3ab02012bdf95263 3 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 f40423d7e66e260c650cd6910863df1426b3474a24692d0c3ab02012bdf95263 3 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # key=f40423d7e66e260c650cd6910863df1426b3474a24692d0c3ab02012bdf95263 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=3 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha512.OiU 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha512.OiU 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.OiU 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file key 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=sha256 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=32 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=f062e73f369d017ef006ac47b3d45f79 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha256.XXX 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha256.8ct 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key f062e73f369d017ef006ac47b3d45f79 1 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 f062e73f369d017ef006ac47b3d45f79 1 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@710 -- # key=f062e73f369d017ef006ac47b3d45f79 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=1 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha256.8ct 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha256.8ct 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.8ct 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file key 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=sha384 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=48 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=91e94326482e59e32b4865d33ff6eae14c3efe3ad6080d95 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha384.XXX 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha384.mdR 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key 91e94326482e59e32b4865d33ff6eae14c3efe3ad6080d95 2 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 91e94326482e59e32b4865d33ff6eae14c3efe3ad6080d95 2 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # key=91e94326482e59e32b4865d33ff6eae14c3efe3ad6080d95 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=2 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha384.mdR 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha384.mdR 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.mdR 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file key 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=sha384 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=48 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=7bd845c24e0c54bef47e371d914257ed0a3a8bb36d75fec6 00:24:10.991 
13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha384.XXX 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha384.4qk 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key 7bd845c24e0c54bef47e371d914257ed0a3a8bb36d75fec6 2 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 7bd845c24e0c54bef47e371d914257ed0a3a8bb36d75fec6 2 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # key=7bd845c24e0c54bef47e371d914257ed0a3a8bb36d75fec6 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=2 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha384.4qk 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha384.4qk 00:24:10.991 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.4qk 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file key 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=sha256 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=32 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=de54927418c66a92eb8aee047b102dff 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha256.XXX 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha256.4l3 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key de54927418c66a92eb8aee047b102dff 1 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 de54927418c66a92eb8aee047b102dff 1 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # key=de54927418c66a92eb8aee047b102dff 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=1 00:24:10.992 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha256.4l3 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha256.4l3 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.4l3 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local 
digest len file key 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=sha512 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=64 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=45ecae152085b18c648574486b40259be5414811fbde268a467f01b7751da423 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha512.XXX 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha512.KPU 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key 45ecae152085b18c648574486b40259be5414811fbde268a467f01b7751da423 3 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 45ecae152085b18c648574486b40259be5414811fbde268a467f01b7751da423 3 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # key=45ecae152085b18c648574486b40259be5414811fbde268a467f01b7751da423 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=3 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha512.KPU 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha512.KPU 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.KPU 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1431934 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1431934 ']' 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:11.250 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
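The four gen_dhchap_key calls above each boil down to: draw len/2 random bytes, hex-encode them with xxd, wrap the hex string into a DHHC-1 secret via an inline python step, and stash it mode 0600 under /tmp/spdk.key-*. The digest ids visible in the log are null=00, sha256=01, sha384=02, sha512=03. A minimal sketch for the "null 48" case follows; the payload layout (base64 of the hex text plus a little-endian CRC32 trailer) is an assumption inferred from the DHHC-1:00:... secrets used further down, so verify it against nvmf/common.sh before reusing:

  key=$(xxd -p -c0 -l 24 /dev/urandom)       # 24 random bytes -> 48 hex characters ("null 48")
  file=$(mktemp -t spdk.key-null.XXX)
  # assumed envelope: DHHC-1:<digest id>:base64(hex-key + CRC32-LE of hex-key):
  python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("DHHC-1:00:%s:" % base64.b64encode(k + crc).decode(), end="")' "$key" > "$file"
  chmod 0600 "$file"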
00:24:11.251 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:11.251 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.508 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:11.508 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:24:11.508 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1431986 /var/tmp/host.sock 00:24:11.508 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1431986 ']' 00:24:11.508 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:24:11.508 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:11.508 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:24:11.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:24:11.508 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:11.508 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1AH 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1AH 00:24:11.766 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1AH 00:24:12.024 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.OiU ]] 00:24:12.024 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OiU 00:24:12.024 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.024 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.024 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.024 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OiU 00:24:12.024 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OiU 00:24:12.282 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:24:12.282 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8ct 00:24:12.282 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.282 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.282 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.282 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.8ct 00:24:12.282 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.8ct 00:24:12.541 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.mdR ]] 00:24:12.541 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mdR 00:24:12.541 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.541 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.541 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.541 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mdR 00:24:12.541 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mdR 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4qk 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4qk 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4qk 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.4l3 ]] 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4l3 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4l3 00:24:12.870 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.4l3 00:24:13.129 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:24:13.129 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.KPU 00:24:13.129 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.129 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.129 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.129 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.KPU 00:24:13.129 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.KPU 00:24:13.388 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:24:13.388 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:24:13.388 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:13.388 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:13.388 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:13.388 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:13.647 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:24:13.647 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:13.647 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:13.647 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:13.647 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:13.647 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:13.647 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.647 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.647 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.647 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.647 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.647 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.905 00:24:13.905 13:52:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:13.905 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:13.905 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:14.164 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.164 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:14.164 13:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.164 13:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.164 13:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.164 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:14.164 { 00:24:14.164 "cntlid": 1, 00:24:14.164 "qid": 0, 00:24:14.164 "state": "enabled", 00:24:14.164 "listen_address": { 00:24:14.164 "trtype": "TCP", 00:24:14.164 "adrfam": "IPv4", 00:24:14.164 "traddr": "10.0.0.2", 00:24:14.164 "trsvcid": "4420" 00:24:14.164 }, 00:24:14.164 "peer_address": { 00:24:14.164 "trtype": "TCP", 00:24:14.164 "adrfam": "IPv4", 00:24:14.164 "traddr": "10.0.0.1", 00:24:14.164 "trsvcid": "58540" 00:24:14.164 }, 00:24:14.164 "auth": { 00:24:14.164 "state": "completed", 00:24:14.164 "digest": "sha256", 00:24:14.164 "dhgroup": "null" 00:24:14.164 } 00:24:14.164 } 00:24:14.164 ]' 00:24:14.164 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:14.164 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:14.164 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:14.164 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:14.164 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:14.423 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:14.423 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:14.423 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.682 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:24:15.249 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:15.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:15.249 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:15.249 13:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.249 13:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:24:15.249 13:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.249 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:15.249 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:15.249 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:15.507 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:24:15.507 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:15.507 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:15.507 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:15.507 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:15.507 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:15.507 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.507 13:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.507 13:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.507 13:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.507 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.507 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.765 00:24:15.765 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:15.765 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:15.765 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:16.023 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.023 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:16.023 13:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:16.023 13:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.023 13:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:16.023 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:16.023 { 00:24:16.023 "cntlid": 3, 00:24:16.023 "qid": 0, 00:24:16.023 "state": "enabled", 00:24:16.023 "listen_address": { 00:24:16.023 
"trtype": "TCP", 00:24:16.023 "adrfam": "IPv4", 00:24:16.023 "traddr": "10.0.0.2", 00:24:16.023 "trsvcid": "4420" 00:24:16.023 }, 00:24:16.023 "peer_address": { 00:24:16.023 "trtype": "TCP", 00:24:16.023 "adrfam": "IPv4", 00:24:16.023 "traddr": "10.0.0.1", 00:24:16.023 "trsvcid": "58562" 00:24:16.023 }, 00:24:16.023 "auth": { 00:24:16.023 "state": "completed", 00:24:16.023 "digest": "sha256", 00:24:16.023 "dhgroup": "null" 00:24:16.023 } 00:24:16.023 } 00:24:16.023 ]' 00:24:16.023 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:16.023 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:16.023 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:16.023 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:16.023 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:16.282 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:16.282 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:16.282 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:16.282 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:24:17.217 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:17.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:17.217 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:17.217 13:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.217 13:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.217 13:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.217 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:17.217 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:17.217 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:17.476 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:24:17.476 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:17.476 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:17.476 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:17.476 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:17.476 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:17.476 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.476 13:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.476 13:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.476 13:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.476 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.476 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.736 00:24:17.736 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:17.736 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:17.736 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:17.736 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.736 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:17.736 13:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.736 13:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.736 13:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.736 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:17.736 { 00:24:17.736 "cntlid": 5, 00:24:17.736 "qid": 0, 00:24:17.736 "state": "enabled", 00:24:17.736 "listen_address": { 00:24:17.736 "trtype": "TCP", 00:24:17.736 "adrfam": "IPv4", 00:24:17.736 "traddr": "10.0.0.2", 00:24:17.736 "trsvcid": "4420" 00:24:17.736 }, 00:24:17.736 "peer_address": { 00:24:17.736 "trtype": "TCP", 00:24:17.736 "adrfam": "IPv4", 00:24:17.736 "traddr": "10.0.0.1", 00:24:17.736 "trsvcid": "58586" 00:24:17.736 }, 00:24:17.736 "auth": { 00:24:17.736 "state": "completed", 00:24:17.736 "digest": "sha256", 00:24:17.736 "dhgroup": "null" 00:24:17.736 } 00:24:17.736 } 00:24:17.736 ]' 00:24:17.736 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:17.995 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:17.995 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:17.995 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:17.995 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:17.995 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:17.995 13:52:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:17.995 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:18.254 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:24:18.821 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.821 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:18.821 13:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.821 13:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.821 13:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.821 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:18.821 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:18.821 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:19.080 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:24:19.080 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:19.080 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:19.080 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:19.080 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:19.080 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:19.080 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:24:19.080 13:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:19.080 13:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.080 13:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:19.080 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:19.080 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:19.338 00:24:19.338 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:19.338 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:19.338 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:19.597 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.597 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:19.597 13:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:19.597 13:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.597 13:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:19.597 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:19.597 { 00:24:19.597 "cntlid": 7, 00:24:19.597 "qid": 0, 00:24:19.597 "state": "enabled", 00:24:19.597 "listen_address": { 00:24:19.597 "trtype": "TCP", 00:24:19.597 "adrfam": "IPv4", 00:24:19.597 "traddr": "10.0.0.2", 00:24:19.597 "trsvcid": "4420" 00:24:19.597 }, 00:24:19.597 "peer_address": { 00:24:19.597 "trtype": "TCP", 00:24:19.597 "adrfam": "IPv4", 00:24:19.597 "traddr": "10.0.0.1", 00:24:19.597 "trsvcid": "58608" 00:24:19.597 }, 00:24:19.597 "auth": { 00:24:19.597 "state": "completed", 00:24:19.597 "digest": "sha256", 00:24:19.597 "dhgroup": "null" 00:24:19.597 } 00:24:19.597 } 00:24:19.597 ]' 00:24:19.597 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:19.855 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:19.855 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:19.855 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:19.855 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:19.855 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:19.855 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:19.855 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:20.114 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:24:20.681 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:20.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.681 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:20.681 13:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.681 
13:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.681 13:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.681 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:20.681 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:20.681 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:20.681 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:20.940 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:24:20.940 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:20.940 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:20.940 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:20.940 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:20.940 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:20.940 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:20.940 13:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.940 13:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.940 13:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.940 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:20.940 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.198 00:24:21.198 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:21.457 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:21.457 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:21.457 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.457 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:21.457 13:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.457 13:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.457 13:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.457 13:52:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:21.457 { 00:24:21.457 "cntlid": 9, 00:24:21.457 "qid": 0, 00:24:21.457 "state": "enabled", 00:24:21.457 "listen_address": { 00:24:21.457 "trtype": "TCP", 00:24:21.457 "adrfam": "IPv4", 00:24:21.457 "traddr": "10.0.0.2", 00:24:21.457 "trsvcid": "4420" 00:24:21.457 }, 00:24:21.457 "peer_address": { 00:24:21.457 "trtype": "TCP", 00:24:21.457 "adrfam": "IPv4", 00:24:21.457 "traddr": "10.0.0.1", 00:24:21.457 "trsvcid": "58640" 00:24:21.457 }, 00:24:21.457 "auth": { 00:24:21.457 "state": "completed", 00:24:21.457 "digest": "sha256", 00:24:21.457 "dhgroup": "ffdhe2048" 00:24:21.457 } 00:24:21.457 } 00:24:21.457 ]' 00:24:21.457 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:21.715 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:21.715 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:21.715 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:21.715 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:21.715 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:21.715 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:21.715 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:21.972 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:24:22.538 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:22.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:22.538 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:22.539 13:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.539 13:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.539 13:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:22.539 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:22.539 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:22.539 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:22.798 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:24:22.798 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:22.798 13:52:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:22.798 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:22.798 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:22.798 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:22.798 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.798 13:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.798 13:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.798 13:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:22.798 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.798 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.056 00:24:23.056 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:23.056 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:23.056 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:23.315 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.315 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:23.315 13:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.315 13:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.315 13:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.315 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:23.315 { 00:24:23.315 "cntlid": 11, 00:24:23.315 "qid": 0, 00:24:23.315 "state": "enabled", 00:24:23.315 "listen_address": { 00:24:23.315 "trtype": "TCP", 00:24:23.315 "adrfam": "IPv4", 00:24:23.315 "traddr": "10.0.0.2", 00:24:23.315 "trsvcid": "4420" 00:24:23.315 }, 00:24:23.315 "peer_address": { 00:24:23.315 "trtype": "TCP", 00:24:23.315 "adrfam": "IPv4", 00:24:23.315 "traddr": "10.0.0.1", 00:24:23.315 "trsvcid": "58662" 00:24:23.315 }, 00:24:23.315 "auth": { 00:24:23.315 "state": "completed", 00:24:23.315 "digest": "sha256", 00:24:23.315 "dhgroup": "ffdhe2048" 00:24:23.315 } 00:24:23.315 } 00:24:23.315 ]' 00:24:23.315 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:23.574 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:23.574 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:23.574 13:52:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:23.574 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:23.574 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:23.574 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:23.574 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:23.832 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:24:24.400 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:24.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:24.400 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:24.400 13:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.400 13:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.400 13:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.400 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:24.400 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:24.400 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:24.660 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:24:24.660 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:24.660 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:24.660 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:24.660 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:24.660 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:24.660 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.660 13:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.660 13:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.660 13:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.660 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.660 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.918 00:24:24.918 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:24.918 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:24.918 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:25.176 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.176 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:25.177 13:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.177 13:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.177 13:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.177 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:25.177 { 00:24:25.177 "cntlid": 13, 00:24:25.177 "qid": 0, 00:24:25.177 "state": "enabled", 00:24:25.177 "listen_address": { 00:24:25.177 "trtype": "TCP", 00:24:25.177 "adrfam": "IPv4", 00:24:25.177 "traddr": "10.0.0.2", 00:24:25.177 "trsvcid": "4420" 00:24:25.177 }, 00:24:25.177 "peer_address": { 00:24:25.177 "trtype": "TCP", 00:24:25.177 "adrfam": "IPv4", 00:24:25.177 "traddr": "10.0.0.1", 00:24:25.177 "trsvcid": "45260" 00:24:25.177 }, 00:24:25.177 "auth": { 00:24:25.177 "state": "completed", 00:24:25.177 "digest": "sha256", 00:24:25.177 "dhgroup": "ffdhe2048" 00:24:25.177 } 00:24:25.177 } 00:24:25.177 ]' 00:24:25.177 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:25.177 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:25.177 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:25.435 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:25.435 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:25.435 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:25.435 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:25.435 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:25.694 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:24:26.261 13:52:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:26.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:26.261 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:26.261 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.261 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.261 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.261 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:26.261 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:26.261 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:26.520 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:24:26.520 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:26.520 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:26.520 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:26.520 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:26.520 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:26.520 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:24:26.520 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.520 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.520 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.520 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.520 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.779 00:24:26.779 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:26.779 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:26.779 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:27.037 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.037 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
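The qpair dump that follows is the verification step of the same pattern every iteration in this log repeats. Condensed into a standalone form, one iteration looks roughly like the sketch below (the sha256 / null / key0 case). This is a minimal sketch, not the test script itself: the paths, NQNs, addresses and RPC names are copied from the log, while the target-side RPC socket (the log never prints it, so the target's default socket is assumed), the DHCHAP_KEY0 / DHCHAP_CKEY0 placeholders standing in for the literal DHHC-1 secrets printed above, and the prior keyring_file_add_key registration of key0 / ckey0 are all assumptions.

#!/usr/bin/env bash
# Sketch of one connect/verify/disconnect iteration as recorded in this log.
set -e

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTRPC="$RPC -s /var/tmp/host.sock"   # RPC socket of the host-side SPDK app
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562

# Restrict the host-side initiator to a single digest/dhgroup combination.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Allow the host on the subsystem with a specific key pair (target side;
# assumes key0/ckey0 were registered with keyring_file_add_key earlier).
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller from the host application using the same keys.
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify that authentication completed with the expected parameters.
[[ $($HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down, then repeat the same check through the kernel initiator.
# DHCHAP_KEY0/DHCHAP_CKEY0 are placeholders for the DHHC-1:... secrets above.
$HOSTRPC bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 809b5fbc-4be7-e711-906e-0017a4403562 \
    --dhchap-secret "$DHCHAP_KEY0" --dhchap-ctrl-secret "$DHCHAP_CKEY0"
nvme disconnect -n "$SUBNQN"
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The surrounding log re-runs this flow for each digest in the configured list (sha256 in this part of the run) and for each dhgroup (null, ffdhe2048 and ffdhe3072 are visible here), cycling key0 through key3; key3 has no controller key, so its nvme connect passes only --dhchap-secret.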
00:24:27.037 13:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.037 13:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.037 13:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.037 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:27.037 { 00:24:27.037 "cntlid": 15, 00:24:27.037 "qid": 0, 00:24:27.037 "state": "enabled", 00:24:27.037 "listen_address": { 00:24:27.037 "trtype": "TCP", 00:24:27.037 "adrfam": "IPv4", 00:24:27.037 "traddr": "10.0.0.2", 00:24:27.037 "trsvcid": "4420" 00:24:27.037 }, 00:24:27.037 "peer_address": { 00:24:27.037 "trtype": "TCP", 00:24:27.037 "adrfam": "IPv4", 00:24:27.037 "traddr": "10.0.0.1", 00:24:27.037 "trsvcid": "45282" 00:24:27.037 }, 00:24:27.037 "auth": { 00:24:27.037 "state": "completed", 00:24:27.037 "digest": "sha256", 00:24:27.037 "dhgroup": "ffdhe2048" 00:24:27.037 } 00:24:27.037 } 00:24:27.037 ]' 00:24:27.037 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:27.037 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:27.037 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:27.294 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:27.294 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:27.294 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:27.294 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:27.294 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:27.552 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:24:28.118 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:28.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:28.118 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:28.118 13:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.118 13:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.118 13:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.118 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.118 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:28.118 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:28.118 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:28.377 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:24:28.377 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:28.377 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:28.377 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:28.377 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:28.377 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:28.377 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.377 13:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.377 13:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.377 13:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.377 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.377 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.635 00:24:28.635 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:28.635 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:28.635 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:28.893 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.893 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:28.893 13:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.893 13:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.893 13:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.893 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:28.893 { 00:24:28.893 "cntlid": 17, 00:24:28.893 "qid": 0, 00:24:28.893 "state": "enabled", 00:24:28.893 "listen_address": { 00:24:28.893 "trtype": "TCP", 00:24:28.893 "adrfam": "IPv4", 00:24:28.893 "traddr": "10.0.0.2", 00:24:28.893 "trsvcid": "4420" 00:24:28.893 }, 00:24:28.893 "peer_address": { 00:24:28.893 "trtype": "TCP", 00:24:28.893 "adrfam": "IPv4", 00:24:28.893 "traddr": "10.0.0.1", 00:24:28.893 "trsvcid": "45302" 00:24:28.893 }, 00:24:28.893 "auth": { 00:24:28.893 "state": "completed", 00:24:28.893 "digest": "sha256", 00:24:28.893 "dhgroup": "ffdhe3072" 00:24:28.893 } 00:24:28.893 } 00:24:28.893 ]' 00:24:28.893 13:52:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:29.150 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:29.150 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:29.150 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:29.150 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:29.150 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:29.150 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:29.150 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:29.408 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:24:29.973 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:29.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:29.973 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:29.973 13:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:29.973 13:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.973 13:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:29.973 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:29.973 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:29.973 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:30.231 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:24:30.231 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:30.231 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:30.231 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:30.231 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:30.231 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:30.231 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.231 13:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:30.231 
13:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.231 13:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:30.231 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.231 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.488 00:24:30.746 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:30.746 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:30.746 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:30.746 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.746 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:30.746 13:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:30.746 13:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.746 13:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:30.746 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:30.746 { 00:24:30.746 "cntlid": 19, 00:24:30.746 "qid": 0, 00:24:30.746 "state": "enabled", 00:24:30.746 "listen_address": { 00:24:30.746 "trtype": "TCP", 00:24:30.746 "adrfam": "IPv4", 00:24:30.746 "traddr": "10.0.0.2", 00:24:30.746 "trsvcid": "4420" 00:24:30.746 }, 00:24:30.746 "peer_address": { 00:24:30.746 "trtype": "TCP", 00:24:30.746 "adrfam": "IPv4", 00:24:30.746 "traddr": "10.0.0.1", 00:24:30.746 "trsvcid": "45324" 00:24:30.746 }, 00:24:30.746 "auth": { 00:24:30.746 "state": "completed", 00:24:30.746 "digest": "sha256", 00:24:30.746 "dhgroup": "ffdhe3072" 00:24:30.746 } 00:24:30.746 } 00:24:30.746 ]' 00:24:30.746 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:30.746 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:30.746 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:31.004 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:31.004 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:31.004 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:31.004 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:31.004 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:31.262 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:24:31.827 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:31.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:31.827 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:31.827 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.827 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.827 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.827 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:31.827 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:31.827 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:32.085 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:24:32.085 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:32.085 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:32.085 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:32.085 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:32.085 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:32.085 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.085 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.085 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.085 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.085 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.085 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.343 00:24:32.343 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:32.343 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:32.343 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:32.601 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.601 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:32.601 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.601 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.601 13:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.601 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:32.601 { 00:24:32.601 "cntlid": 21, 00:24:32.601 "qid": 0, 00:24:32.601 "state": "enabled", 00:24:32.601 "listen_address": { 00:24:32.601 "trtype": "TCP", 00:24:32.601 "adrfam": "IPv4", 00:24:32.601 "traddr": "10.0.0.2", 00:24:32.601 "trsvcid": "4420" 00:24:32.601 }, 00:24:32.601 "peer_address": { 00:24:32.601 "trtype": "TCP", 00:24:32.601 "adrfam": "IPv4", 00:24:32.601 "traddr": "10.0.0.1", 00:24:32.601 "trsvcid": "45338" 00:24:32.601 }, 00:24:32.601 "auth": { 00:24:32.601 "state": "completed", 00:24:32.601 "digest": "sha256", 00:24:32.601 "dhgroup": "ffdhe3072" 00:24:32.601 } 00:24:32.601 } 00:24:32.601 ]' 00:24:32.601 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:32.601 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:32.601 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:32.917 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:32.917 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:32.917 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:32.917 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:32.917 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:32.917 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:33.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.856 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.115 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.115 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:34.115 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:34.373 00:24:34.373 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:34.373 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:34.373 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:34.631 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.632 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:34.632 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.632 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.632 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.632 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:34.632 { 00:24:34.632 "cntlid": 23, 00:24:34.632 "qid": 0, 00:24:34.632 "state": "enabled", 00:24:34.632 "listen_address": { 00:24:34.632 "trtype": "TCP", 00:24:34.632 "adrfam": "IPv4", 00:24:34.632 "traddr": "10.0.0.2", 00:24:34.632 "trsvcid": "4420" 00:24:34.632 }, 00:24:34.632 "peer_address": { 00:24:34.632 "trtype": "TCP", 00:24:34.632 
"adrfam": "IPv4", 00:24:34.632 "traddr": "10.0.0.1", 00:24:34.632 "trsvcid": "44350" 00:24:34.632 }, 00:24:34.632 "auth": { 00:24:34.632 "state": "completed", 00:24:34.632 "digest": "sha256", 00:24:34.632 "dhgroup": "ffdhe3072" 00:24:34.632 } 00:24:34.632 } 00:24:34.632 ]' 00:24:34.632 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:34.632 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:34.632 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:34.632 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:34.632 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:34.632 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:34.632 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:34.632 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:34.889 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:24:35.823 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:35.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:35.823 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:35.823 13:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.823 13:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.823 13:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.823 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:35.823 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:35.823 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:35.823 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:35.823 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:24:35.823 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:35.823 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:35.823 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:35.823 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:35.823 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:35.823 13:52:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:35.823 13:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.823 13:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.823 13:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.823 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:35.823 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.081 00:24:36.339 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:36.339 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:36.339 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:36.339 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.339 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:36.339 13:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.339 13:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.339 13:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.339 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:36.339 { 00:24:36.339 "cntlid": 25, 00:24:36.339 "qid": 0, 00:24:36.339 "state": "enabled", 00:24:36.339 "listen_address": { 00:24:36.339 "trtype": "TCP", 00:24:36.339 "adrfam": "IPv4", 00:24:36.339 "traddr": "10.0.0.2", 00:24:36.339 "trsvcid": "4420" 00:24:36.339 }, 00:24:36.339 "peer_address": { 00:24:36.339 "trtype": "TCP", 00:24:36.339 "adrfam": "IPv4", 00:24:36.339 "traddr": "10.0.0.1", 00:24:36.339 "trsvcid": "44372" 00:24:36.339 }, 00:24:36.339 "auth": { 00:24:36.339 "state": "completed", 00:24:36.339 "digest": "sha256", 00:24:36.339 "dhgroup": "ffdhe4096" 00:24:36.339 } 00:24:36.339 } 00:24:36.339 ]' 00:24:36.339 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:36.598 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:36.598 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:36.598 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:36.598 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:36.598 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:36.598 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:36.598 
13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:36.856 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:24:37.421 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:37.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:37.421 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:37.422 13:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.422 13:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.422 13:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.422 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:37.422 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:37.422 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:37.680 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:24:37.680 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:37.680 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:37.680 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:37.680 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:37.680 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:37.680 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:37.680 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.680 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.680 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.680 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:37.680 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.245 00:24:38.245 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:38.245 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:38.245 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:38.245 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.245 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:38.245 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.245 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.245 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.245 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:38.245 { 00:24:38.245 "cntlid": 27, 00:24:38.245 "qid": 0, 00:24:38.245 "state": "enabled", 00:24:38.245 "listen_address": { 00:24:38.245 "trtype": "TCP", 00:24:38.245 "adrfam": "IPv4", 00:24:38.245 "traddr": "10.0.0.2", 00:24:38.245 "trsvcid": "4420" 00:24:38.245 }, 00:24:38.245 "peer_address": { 00:24:38.245 "trtype": "TCP", 00:24:38.245 "adrfam": "IPv4", 00:24:38.245 "traddr": "10.0.0.1", 00:24:38.245 "trsvcid": "44400" 00:24:38.245 }, 00:24:38.245 "auth": { 00:24:38.245 "state": "completed", 00:24:38.245 "digest": "sha256", 00:24:38.245 "dhgroup": "ffdhe4096" 00:24:38.245 } 00:24:38.245 } 00:24:38.245 ]' 00:24:38.245 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:38.245 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:38.245 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:38.502 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:38.502 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:38.502 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:38.502 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:38.502 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:38.760 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:24:39.325 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:39.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:39.325 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 
00:24:39.325 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.325 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.325 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.325 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:39.325 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:39.325 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:39.584 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:24:39.584 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:39.584 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:39.584 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:39.585 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:39.585 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:39.585 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.585 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.585 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.585 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.585 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.585 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.842 00:24:39.842 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:39.842 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:39.842 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:40.101 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.101 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:40.101 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.101 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.101 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.101 
13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:40.101 { 00:24:40.101 "cntlid": 29, 00:24:40.101 "qid": 0, 00:24:40.101 "state": "enabled", 00:24:40.101 "listen_address": { 00:24:40.101 "trtype": "TCP", 00:24:40.101 "adrfam": "IPv4", 00:24:40.101 "traddr": "10.0.0.2", 00:24:40.101 "trsvcid": "4420" 00:24:40.101 }, 00:24:40.101 "peer_address": { 00:24:40.101 "trtype": "TCP", 00:24:40.101 "adrfam": "IPv4", 00:24:40.101 "traddr": "10.0.0.1", 00:24:40.101 "trsvcid": "44440" 00:24:40.101 }, 00:24:40.101 "auth": { 00:24:40.101 "state": "completed", 00:24:40.101 "digest": "sha256", 00:24:40.101 "dhgroup": "ffdhe4096" 00:24:40.101 } 00:24:40.101 } 00:24:40.101 ]' 00:24:40.101 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:40.101 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:40.101 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:40.359 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:40.359 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:40.359 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:40.359 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:40.359 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:40.617 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:24:41.182 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:41.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:41.182 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:41.182 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.182 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.182 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.182 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:41.182 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:41.182 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:41.441 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:24:41.441 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:41.441 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:24:41.441 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:41.441 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:41.441 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:41.441 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:24:41.441 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.441 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.441 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.441 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:41.441 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:41.699 00:24:41.699 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:41.700 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:41.700 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:41.958 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.959 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:41.959 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.959 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.959 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.959 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:41.959 { 00:24:41.959 "cntlid": 31, 00:24:41.959 "qid": 0, 00:24:41.959 "state": "enabled", 00:24:41.959 "listen_address": { 00:24:41.959 "trtype": "TCP", 00:24:41.959 "adrfam": "IPv4", 00:24:41.959 "traddr": "10.0.0.2", 00:24:41.959 "trsvcid": "4420" 00:24:41.959 }, 00:24:41.959 "peer_address": { 00:24:41.959 "trtype": "TCP", 00:24:41.959 "adrfam": "IPv4", 00:24:41.959 "traddr": "10.0.0.1", 00:24:41.959 "trsvcid": "44468" 00:24:41.959 }, 00:24:41.959 "auth": { 00:24:41.959 "state": "completed", 00:24:41.959 "digest": "sha256", 00:24:41.959 "dhgroup": "ffdhe4096" 00:24:41.959 } 00:24:41.959 } 00:24:41.959 ]' 00:24:41.959 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:41.959 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:41.959 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:41.959 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:41.959 13:52:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:42.218 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:42.218 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:42.218 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:42.218 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:24:43.152 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:43.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:43.152 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:43.152 13:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.152 13:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.152 13:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.152 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.152 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:43.152 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:43.152 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:43.410 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:24:43.410 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:43.410 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:43.410 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:43.410 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:43.410 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:43.410 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.410 13:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.410 13:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.410 13:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.410 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:24:43.410 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.669 00:24:43.669 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:43.669 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:43.669 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:43.927 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.927 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:43.927 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.927 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.927 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.928 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:43.928 { 00:24:43.928 "cntlid": 33, 00:24:43.928 "qid": 0, 00:24:43.928 "state": "enabled", 00:24:43.928 "listen_address": { 00:24:43.928 "trtype": "TCP", 00:24:43.928 "adrfam": "IPv4", 00:24:43.928 "traddr": "10.0.0.2", 00:24:43.928 "trsvcid": "4420" 00:24:43.928 }, 00:24:43.928 "peer_address": { 00:24:43.928 "trtype": "TCP", 00:24:43.928 "adrfam": "IPv4", 00:24:43.928 "traddr": "10.0.0.1", 00:24:43.928 "trsvcid": "44484" 00:24:43.928 }, 00:24:43.928 "auth": { 00:24:43.928 "state": "completed", 00:24:43.928 "digest": "sha256", 00:24:43.928 "dhgroup": "ffdhe6144" 00:24:43.928 } 00:24:43.928 } 00:24:43.928 ]' 00:24:43.928 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:43.928 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:43.928 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:43.928 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:43.928 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:44.186 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:44.186 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:44.186 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:44.187 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:24:45.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.123 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.691 00:24:45.691 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:45.691 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:45.691 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:45.950 { 00:24:45.950 "cntlid": 35, 00:24:45.950 "qid": 0, 00:24:45.950 "state": "enabled", 00:24:45.950 "listen_address": { 00:24:45.950 "trtype": "TCP", 00:24:45.950 "adrfam": "IPv4", 00:24:45.950 "traddr": "10.0.0.2", 00:24:45.950 "trsvcid": "4420" 00:24:45.950 }, 00:24:45.950 "peer_address": { 00:24:45.950 "trtype": "TCP", 00:24:45.950 "adrfam": "IPv4", 00:24:45.950 "traddr": "10.0.0.1", 00:24:45.950 "trsvcid": "52664" 00:24:45.950 }, 00:24:45.950 "auth": { 00:24:45.950 "state": "completed", 00:24:45.950 "digest": "sha256", 00:24:45.950 "dhgroup": "ffdhe6144" 00:24:45.950 } 00:24:45.950 } 00:24:45.950 ]' 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:45.950 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:46.209 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:24:47.145 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:47.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:47.145 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:47.145 13:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:47.145 13:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.145 13:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:47.145 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:47.145 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.146 13:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.714 00:24:47.714 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:47.714 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:47.714 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:47.972 { 00:24:47.972 "cntlid": 37, 00:24:47.972 "qid": 0, 00:24:47.972 "state": "enabled", 00:24:47.972 "listen_address": { 00:24:47.972 "trtype": "TCP", 00:24:47.972 "adrfam": "IPv4", 00:24:47.972 "traddr": "10.0.0.2", 00:24:47.972 "trsvcid": "4420" 00:24:47.972 }, 00:24:47.972 "peer_address": { 00:24:47.972 "trtype": "TCP", 00:24:47.972 "adrfam": "IPv4", 00:24:47.972 "traddr": "10.0.0.1", 00:24:47.972 "trsvcid": "52692" 00:24:47.972 }, 00:24:47.972 "auth": { 00:24:47.972 "state": "completed", 00:24:47.972 "digest": "sha256", 00:24:47.972 "dhgroup": "ffdhe6144" 00:24:47.972 } 00:24:47.972 } 00:24:47.972 ]' 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:47.972 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:48.231 13:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:49.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:49.168 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:49.736 00:24:49.736 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:49.736 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:49.736 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:49.994 { 00:24:49.994 "cntlid": 39, 00:24:49.994 "qid": 0, 00:24:49.994 "state": "enabled", 00:24:49.994 "listen_address": { 00:24:49.994 "trtype": "TCP", 00:24:49.994 "adrfam": "IPv4", 00:24:49.994 "traddr": "10.0.0.2", 00:24:49.994 "trsvcid": "4420" 00:24:49.994 }, 00:24:49.994 "peer_address": { 00:24:49.994 "trtype": "TCP", 00:24:49.994 "adrfam": "IPv4", 00:24:49.994 "traddr": "10.0.0.1", 00:24:49.994 "trsvcid": "52726" 00:24:49.994 }, 00:24:49.994 "auth": { 00:24:49.994 "state": "completed", 00:24:49.994 "digest": "sha256", 00:24:49.994 "dhgroup": "ffdhe6144" 00:24:49.994 } 00:24:49.994 } 00:24:49.994 ]' 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:49.994 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:50.252 13:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret 
DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:24:50.817 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:50.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:50.817 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:50.817 13:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:50.818 13:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:51.076 13:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.011 00:24:52.011 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:52.011 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:52.011 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:52.011 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.012 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:52.012 13:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.012 13:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.012 13:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.012 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:52.012 { 00:24:52.012 "cntlid": 41, 00:24:52.012 "qid": 0, 00:24:52.012 "state": "enabled", 00:24:52.012 "listen_address": { 00:24:52.012 "trtype": "TCP", 00:24:52.012 "adrfam": "IPv4", 00:24:52.012 "traddr": "10.0.0.2", 00:24:52.012 "trsvcid": "4420" 00:24:52.012 }, 00:24:52.012 "peer_address": { 00:24:52.012 "trtype": "TCP", 00:24:52.012 "adrfam": "IPv4", 00:24:52.012 "traddr": "10.0.0.1", 00:24:52.012 "trsvcid": "52758" 00:24:52.012 }, 00:24:52.012 "auth": { 00:24:52.012 "state": "completed", 00:24:52.012 "digest": "sha256", 00:24:52.012 "dhgroup": "ffdhe8192" 00:24:52.012 } 00:24:52.012 } 00:24:52.012 ]' 00:24:52.012 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:52.012 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:52.012 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:52.012 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:52.012 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:52.270 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:52.270 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:52.270 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:52.529 13:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:24:53.096 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:53.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:53.096 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:53.096 13:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.096 13:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.096 13:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.096 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:24:53.096 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:53.096 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:53.355 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:24:53.355 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:53.355 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:53.355 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:53.355 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:53.355 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:53.355 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:53.355 13:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.355 13:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.355 13:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.355 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:53.355 13:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:53.990 00:24:53.990 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:53.990 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:53.990 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:54.249 { 00:24:54.249 "cntlid": 43, 00:24:54.249 "qid": 0, 00:24:54.249 "state": "enabled", 00:24:54.249 "listen_address": { 00:24:54.249 "trtype": "TCP", 00:24:54.249 "adrfam": "IPv4", 00:24:54.249 "traddr": "10.0.0.2", 00:24:54.249 "trsvcid": "4420" 00:24:54.249 }, 00:24:54.249 "peer_address": { 
00:24:54.249 "trtype": "TCP", 00:24:54.249 "adrfam": "IPv4", 00:24:54.249 "traddr": "10.0.0.1", 00:24:54.249 "trsvcid": "52768" 00:24:54.249 }, 00:24:54.249 "auth": { 00:24:54.249 "state": "completed", 00:24:54.249 "digest": "sha256", 00:24:54.249 "dhgroup": "ffdhe8192" 00:24:54.249 } 00:24:54.249 } 00:24:54.249 ]' 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:54.249 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:54.507 13:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:24:55.440 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:55.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:55.440 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:55.440 13:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.440 13:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.440 13:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.440 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:55.440 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:55.440 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:55.440 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:24:55.440 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:55.441 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:55.441 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:55.441 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:55.441 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:55.441 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:55.441 13:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.441 13:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.441 13:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.441 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:55.441 13:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.374 00:24:56.374 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:56.374 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:56.374 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:56.374 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.374 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:56.374 13:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.374 13:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.374 13:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.374 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:56.374 { 00:24:56.374 "cntlid": 45, 00:24:56.374 "qid": 0, 00:24:56.374 "state": "enabled", 00:24:56.374 "listen_address": { 00:24:56.374 "trtype": "TCP", 00:24:56.374 "adrfam": "IPv4", 00:24:56.374 "traddr": "10.0.0.2", 00:24:56.374 "trsvcid": "4420" 00:24:56.374 }, 00:24:56.374 "peer_address": { 00:24:56.374 "trtype": "TCP", 00:24:56.374 "adrfam": "IPv4", 00:24:56.374 "traddr": "10.0.0.1", 00:24:56.374 "trsvcid": "42504" 00:24:56.374 }, 00:24:56.374 "auth": { 00:24:56.374 "state": "completed", 00:24:56.374 "digest": "sha256", 00:24:56.374 "dhgroup": "ffdhe8192" 00:24:56.374 } 00:24:56.374 } 00:24:56.374 ]' 00:24:56.374 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:56.374 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:56.374 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:56.633 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:56.633 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:56.633 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:56.633 13:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:56.633 13:53:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:56.892 13:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:24:57.459 13:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:57.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:57.459 13:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:57.459 13:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.459 13:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.459 13:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:57.459 13:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:57.459 13:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:57.459 13:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:57.719 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:24:57.719 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:57.719 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:57.719 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:57.719 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:57.719 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:57.719 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:24:57.719 13:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.719 13:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.719 13:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:57.719 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:57.719 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
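The key3 pass above shows the unidirectional variant of the flow: ${ckeys[3]} is empty, so no --dhchap-ctrlr-key is passed and only the target verifies the host. A minimal sketch of the two RPCs involved (script path and the <host_nqn> placeholder are illustrative, not taken from this run):

    # target side: register the host NQN with DH-HMAC-CHAP key3 only (no controller key)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host_nqn> --dhchap-key key3
    # host side: attach over TCP and authenticate with the same key
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q <host_nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3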
00:24:58.286 00:24:58.286 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:58.287 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:58.287 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:58.545 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.545 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:58.545 13:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.545 13:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.545 13:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.545 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:58.545 { 00:24:58.545 "cntlid": 47, 00:24:58.545 "qid": 0, 00:24:58.545 "state": "enabled", 00:24:58.545 "listen_address": { 00:24:58.545 "trtype": "TCP", 00:24:58.545 "adrfam": "IPv4", 00:24:58.545 "traddr": "10.0.0.2", 00:24:58.545 "trsvcid": "4420" 00:24:58.545 }, 00:24:58.545 "peer_address": { 00:24:58.545 "trtype": "TCP", 00:24:58.545 "adrfam": "IPv4", 00:24:58.545 "traddr": "10.0.0.1", 00:24:58.545 "trsvcid": "42540" 00:24:58.545 }, 00:24:58.545 "auth": { 00:24:58.545 "state": "completed", 00:24:58.545 "digest": "sha256", 00:24:58.545 "dhgroup": "ffdhe8192" 00:24:58.545 } 00:24:58.545 } 00:24:58.545 ]' 00:24:58.545 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:58.545 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:58.545 13:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:58.804 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:58.804 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:58.804 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:58.804 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:58.804 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:59.063 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:24:59.629 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:59.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:59.629 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:59.629 13:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.629 13:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.629 
13:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.629 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:24:59.629 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.629 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:59.629 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:59.629 13:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:59.888 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:24:59.888 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:59.888 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:59.888 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:59.888 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:59.888 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:59.888 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.888 13:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.888 13:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.888 13:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.888 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.888 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.147 00:25:00.147 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:00.147 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:00.147 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.406 13:53:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:00.406 { 00:25:00.406 "cntlid": 49, 00:25:00.406 "qid": 0, 00:25:00.406 "state": "enabled", 00:25:00.406 "listen_address": { 00:25:00.406 "trtype": "TCP", 00:25:00.406 "adrfam": "IPv4", 00:25:00.406 "traddr": "10.0.0.2", 00:25:00.406 "trsvcid": "4420" 00:25:00.406 }, 00:25:00.406 "peer_address": { 00:25:00.406 "trtype": "TCP", 00:25:00.406 "adrfam": "IPv4", 00:25:00.406 "traddr": "10.0.0.1", 00:25:00.406 "trsvcid": "42560" 00:25:00.406 }, 00:25:00.406 "auth": { 00:25:00.406 "state": "completed", 00:25:00.406 "digest": "sha384", 00:25:00.406 "dhgroup": "null" 00:25:00.406 } 00:25:00.406 } 00:25:00.406 ]' 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:00.406 13:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:00.665 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:01.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.600 13:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.600 13:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.600 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.600 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.860 00:25:01.860 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:01.860 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:01.860 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:02.118 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.118 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:02.118 13:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:02.118 13:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.118 13:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:02.118 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:02.118 { 00:25:02.118 "cntlid": 51, 00:25:02.118 "qid": 0, 00:25:02.118 "state": "enabled", 00:25:02.118 "listen_address": { 00:25:02.118 "trtype": "TCP", 00:25:02.118 "adrfam": "IPv4", 00:25:02.118 "traddr": "10.0.0.2", 00:25:02.118 "trsvcid": "4420" 00:25:02.118 }, 00:25:02.118 "peer_address": { 00:25:02.118 "trtype": "TCP", 00:25:02.118 "adrfam": "IPv4", 00:25:02.118 "traddr": "10.0.0.1", 00:25:02.118 "trsvcid": "42588" 00:25:02.118 }, 00:25:02.118 "auth": { 00:25:02.118 "state": "completed", 00:25:02.118 "digest": "sha384", 00:25:02.118 "dhgroup": "null" 00:25:02.118 } 00:25:02.118 } 00:25:02.118 ]' 00:25:02.118 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:02.376 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:02.376 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:02.376 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
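Each pass ends with the same three jq checks against the nvmf_subsystem_get_qpairs output; a condensed sketch of that verification step, assuming the values expected for this sha384/null pass (variable name illustrative):

    # fetch the active qpairs for the subsystem and confirm the negotiated auth parameters
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]     # hash negotiated for this pass
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]       # "null" = no FFDHE group used
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # DH-HMAC-CHAP handshake finished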
00:25:02.376 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:02.376 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:02.376 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:02.376 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:02.635 13:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:25:03.204 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:03.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:03.204 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:03.204 13:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.204 13:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.204 13:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.204 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:03.204 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:25:03.204 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:25:03.464 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:25:03.464 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:03.464 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:03.464 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:03.464 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:03.464 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:03.464 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.464 13:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.464 13:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.464 13:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.464 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:25:03.464 13:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.723 00:25:03.723 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:03.982 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:03.982 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:03.982 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.982 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:03.982 13:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.982 13:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.982 13:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.982 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:03.982 { 00:25:03.982 "cntlid": 53, 00:25:03.982 "qid": 0, 00:25:03.982 "state": "enabled", 00:25:03.982 "listen_address": { 00:25:03.982 "trtype": "TCP", 00:25:03.982 "adrfam": "IPv4", 00:25:03.982 "traddr": "10.0.0.2", 00:25:03.982 "trsvcid": "4420" 00:25:03.982 }, 00:25:03.982 "peer_address": { 00:25:03.982 "trtype": "TCP", 00:25:03.982 "adrfam": "IPv4", 00:25:03.982 "traddr": "10.0.0.1", 00:25:03.982 "trsvcid": "51114" 00:25:03.982 }, 00:25:03.982 "auth": { 00:25:03.982 "state": "completed", 00:25:03.982 "digest": "sha384", 00:25:03.982 "dhgroup": "null" 00:25:03.982 } 00:25:03.982 } 00:25:03.982 ]' 00:25:03.982 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:03.982 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:03.982 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:04.240 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:04.241 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:04.241 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:04.241 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:04.241 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:04.499 13:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:25:05.067 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:05.067 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:25:05.067 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:05.067 13:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.067 13:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.067 13:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.067 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:05.067 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:25:05.067 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:25:05.326 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:25:05.326 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:05.326 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:05.326 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:05.326 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:05.326 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:05.326 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:25:05.326 13:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.326 13:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.326 13:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.326 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:05.326 13:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:05.586 00:25:05.586 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:05.586 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:05.586 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:05.845 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.845 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:05.845 13:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.846 13:53:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.846 13:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.846 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:05.846 { 00:25:05.846 "cntlid": 55, 00:25:05.846 "qid": 0, 00:25:05.846 "state": "enabled", 00:25:05.846 "listen_address": { 00:25:05.846 "trtype": "TCP", 00:25:05.846 "adrfam": "IPv4", 00:25:05.846 "traddr": "10.0.0.2", 00:25:05.846 "trsvcid": "4420" 00:25:05.846 }, 00:25:05.846 "peer_address": { 00:25:05.846 "trtype": "TCP", 00:25:05.846 "adrfam": "IPv4", 00:25:05.846 "traddr": "10.0.0.1", 00:25:05.846 "trsvcid": "51152" 00:25:05.846 }, 00:25:05.846 "auth": { 00:25:05.846 "state": "completed", 00:25:05.846 "digest": "sha384", 00:25:05.846 "dhgroup": "null" 00:25:05.846 } 00:25:05.846 } 00:25:05.846 ]' 00:25:05.846 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:06.104 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:06.104 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:06.104 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:06.104 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:06.104 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:06.104 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:06.104 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:06.363 13:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:25:06.930 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:06.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:06.930 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:06.930 13:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.930 13:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.931 13:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.931 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.931 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:06.931 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:06.931 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:07.189 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:25:07.189 
13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:07.189 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:07.189 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:07.189 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:07.189 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:07.189 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.189 13:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:07.189 13:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.190 13:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:07.190 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.190 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.449 00:25:07.449 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:07.449 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:07.449 13:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:07.708 13:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.708 13:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:07.708 13:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:07.708 13:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.708 13:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:07.708 13:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:07.708 { 00:25:07.708 "cntlid": 57, 00:25:07.708 "qid": 0, 00:25:07.708 "state": "enabled", 00:25:07.708 "listen_address": { 00:25:07.708 "trtype": "TCP", 00:25:07.708 "adrfam": "IPv4", 00:25:07.708 "traddr": "10.0.0.2", 00:25:07.708 "trsvcid": "4420" 00:25:07.708 }, 00:25:07.708 "peer_address": { 00:25:07.708 "trtype": "TCP", 00:25:07.708 "adrfam": "IPv4", 00:25:07.708 "traddr": "10.0.0.1", 00:25:07.708 "trsvcid": "51188" 00:25:07.708 }, 00:25:07.708 "auth": { 00:25:07.708 "state": "completed", 00:25:07.708 "digest": "sha384", 00:25:07.708 "dhgroup": "ffdhe2048" 00:25:07.708 } 00:25:07.708 } 00:25:07.708 ]' 00:25:07.708 13:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:07.966 13:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:07.966 13:53:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:07.966 13:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:07.966 13:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:07.966 13:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:07.966 13:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:07.966 13:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:08.225 13:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:25:08.791 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:08.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:08.791 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:08.791 13:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.791 13:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.791 13:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.792 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:08.792 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:08.792 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:09.051 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:25:09.051 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:09.051 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:09.051 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:09.051 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:09.051 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:09.051 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.051 13:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.051 13:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.051 13:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.051 13:53:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.051 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.309 00:25:09.309 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:09.310 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:09.310 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:09.568 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.568 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:09.568 13:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.568 13:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.568 13:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.568 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:09.568 { 00:25:09.568 "cntlid": 59, 00:25:09.568 "qid": 0, 00:25:09.568 "state": "enabled", 00:25:09.568 "listen_address": { 00:25:09.568 "trtype": "TCP", 00:25:09.568 "adrfam": "IPv4", 00:25:09.568 "traddr": "10.0.0.2", 00:25:09.568 "trsvcid": "4420" 00:25:09.568 }, 00:25:09.568 "peer_address": { 00:25:09.568 "trtype": "TCP", 00:25:09.568 "adrfam": "IPv4", 00:25:09.568 "traddr": "10.0.0.1", 00:25:09.568 "trsvcid": "51212" 00:25:09.568 }, 00:25:09.568 "auth": { 00:25:09.568 "state": "completed", 00:25:09.568 "digest": "sha384", 00:25:09.568 "dhgroup": "ffdhe2048" 00:25:09.568 } 00:25:09.568 } 00:25:09.568 ]' 00:25:09.568 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:09.568 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:09.568 13:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:09.568 13:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:09.568 13:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:09.828 13:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:09.828 13:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:09.828 13:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:09.828 13:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:25:10.886 13:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:10.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:10.886 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:10.886 13:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.886 13:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.886 13:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.886 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.887 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:11.146 00:25:11.146 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:11.146 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:11.146 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:11.405 { 00:25:11.405 "cntlid": 61, 00:25:11.405 "qid": 0, 00:25:11.405 "state": "enabled", 00:25:11.405 "listen_address": { 00:25:11.405 "trtype": "TCP", 00:25:11.405 "adrfam": "IPv4", 00:25:11.405 "traddr": "10.0.0.2", 00:25:11.405 "trsvcid": "4420" 00:25:11.405 }, 00:25:11.405 "peer_address": { 00:25:11.405 "trtype": "TCP", 00:25:11.405 "adrfam": "IPv4", 00:25:11.405 "traddr": "10.0.0.1", 00:25:11.405 "trsvcid": "51236" 00:25:11.405 }, 00:25:11.405 "auth": { 00:25:11.405 "state": "completed", 00:25:11.405 "digest": "sha384", 00:25:11.405 "dhgroup": "ffdhe2048" 00:25:11.405 } 00:25:11.405 } 00:25:11.405 ]' 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:11.405 13:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:11.664 13:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:25:12.602 13:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:12.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:12.602 13:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:12.602 13:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.602 13:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.602 13:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.602 13:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:12.602 13:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:25:12.602 13:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:12.861 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:25:12.861 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:12.861 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:12.861 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:12.861 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:12.861 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:12.861 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:25:12.861 13:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.861 13:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.861 13:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.861 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:12.861 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:13.120 00:25:13.120 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:13.120 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:13.120 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:13.379 { 00:25:13.379 "cntlid": 63, 00:25:13.379 "qid": 0, 00:25:13.379 "state": "enabled", 00:25:13.379 "listen_address": { 00:25:13.379 "trtype": "TCP", 00:25:13.379 "adrfam": "IPv4", 00:25:13.379 "traddr": "10.0.0.2", 00:25:13.379 "trsvcid": "4420" 00:25:13.379 }, 00:25:13.379 "peer_address": { 00:25:13.379 "trtype": "TCP", 00:25:13.379 "adrfam": "IPv4", 00:25:13.379 "traddr": "10.0.0.1", 00:25:13.379 "trsvcid": "51262" 00:25:13.379 }, 00:25:13.379 "auth": { 00:25:13.379 "state": "completed", 00:25:13.379 "digest": 
"sha384", 00:25:13.379 "dhgroup": "ffdhe2048" 00:25:13.379 } 00:25:13.379 } 00:25:13.379 ]' 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:13.379 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:13.638 13:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:25:14.575 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:14.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:14.575 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:14.575 13:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.575 13:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:14.575 13:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.575 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.575 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:14.575 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.575 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.575 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:25:14.576 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:14.576 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:14.576 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:14.576 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:14.576 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:14.576 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:25:14.576 13:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:14.576 13:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:14.576 13:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.576 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.576 13:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.835 00:25:14.835 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:14.835 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:14.835 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:15.093 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.093 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:15.093 13:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.093 13:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.093 13:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.093 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:15.093 { 00:25:15.093 "cntlid": 65, 00:25:15.093 "qid": 0, 00:25:15.093 "state": "enabled", 00:25:15.093 "listen_address": { 00:25:15.094 "trtype": "TCP", 00:25:15.094 "adrfam": "IPv4", 00:25:15.094 "traddr": "10.0.0.2", 00:25:15.094 "trsvcid": "4420" 00:25:15.094 }, 00:25:15.094 "peer_address": { 00:25:15.094 "trtype": "TCP", 00:25:15.094 "adrfam": "IPv4", 00:25:15.094 "traddr": "10.0.0.1", 00:25:15.094 "trsvcid": "51158" 00:25:15.094 }, 00:25:15.094 "auth": { 00:25:15.094 "state": "completed", 00:25:15.094 "digest": "sha384", 00:25:15.094 "dhgroup": "ffdhe3072" 00:25:15.094 } 00:25:15.094 } 00:25:15.094 ]' 00:25:15.094 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:15.094 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:15.094 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:15.094 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:15.094 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:15.351 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:15.351 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:15.351 13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:15.609 
13:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:25:16.176 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:16.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:16.176 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:16.176 13:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.176 13:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:16.176 13:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.176 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:16.176 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:16.176 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:16.435 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:25:16.435 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:16.435 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:16.436 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:16.436 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:16.436 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:16.436 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.436 13:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.436 13:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:16.436 13:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.436 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.436 13:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.694 00:25:16.694 13:53:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:16.694 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:16.694 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:16.953 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.953 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:16.953 13:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.953 13:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:16.953 13:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.953 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:16.953 { 00:25:16.953 "cntlid": 67, 00:25:16.953 "qid": 0, 00:25:16.953 "state": "enabled", 00:25:16.953 "listen_address": { 00:25:16.953 "trtype": "TCP", 00:25:16.953 "adrfam": "IPv4", 00:25:16.953 "traddr": "10.0.0.2", 00:25:16.953 "trsvcid": "4420" 00:25:16.953 }, 00:25:16.953 "peer_address": { 00:25:16.953 "trtype": "TCP", 00:25:16.953 "adrfam": "IPv4", 00:25:16.953 "traddr": "10.0.0.1", 00:25:16.953 "trsvcid": "51190" 00:25:16.953 }, 00:25:16.953 "auth": { 00:25:16.953 "state": "completed", 00:25:16.953 "digest": "sha384", 00:25:16.953 "dhgroup": "ffdhe3072" 00:25:16.953 } 00:25:16.953 } 00:25:16.953 ]' 00:25:16.953 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:16.953 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:16.953 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:16.953 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:16.953 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:17.213 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:17.213 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:17.213 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:17.472 13:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:25:18.039 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:18.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:18.039 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:18.039 13:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.039 13:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:18.039 
13:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.039 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:18.039 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:18.039 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:18.298 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:25:18.298 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:18.298 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:18.298 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:18.298 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:18.298 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:18.298 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:18.298 13:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.298 13:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:18.298 13:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.298 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:18.298 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:18.557 00:25:18.557 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:18.557 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:18.557 13:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:18.816 13:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.816 13:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:18.816 13:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.816 13:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:18.816 13:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.816 13:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:18.816 { 00:25:18.816 "cntlid": 69, 00:25:18.816 "qid": 0, 00:25:18.816 "state": "enabled", 00:25:18.816 "listen_address": { 
00:25:18.816 "trtype": "TCP", 00:25:18.816 "adrfam": "IPv4", 00:25:18.816 "traddr": "10.0.0.2", 00:25:18.816 "trsvcid": "4420" 00:25:18.816 }, 00:25:18.816 "peer_address": { 00:25:18.816 "trtype": "TCP", 00:25:18.816 "adrfam": "IPv4", 00:25:18.816 "traddr": "10.0.0.1", 00:25:18.816 "trsvcid": "51210" 00:25:18.816 }, 00:25:18.816 "auth": { 00:25:18.816 "state": "completed", 00:25:18.816 "digest": "sha384", 00:25:18.816 "dhgroup": "ffdhe3072" 00:25:18.816 } 00:25:18.816 } 00:25:18.816 ]' 00:25:18.816 13:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:19.075 13:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:19.075 13:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:19.075 13:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:19.075 13:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:19.075 13:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:19.075 13:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:19.075 13:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:19.334 13:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:25:19.902 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:19.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:19.902 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:19.902 13:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.902 13:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:19.902 13:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.902 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:19.902 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:19.902 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:20.161 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:25:20.161 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:20.161 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:20.161 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:20.161 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:20.161 
13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:20.161 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:25:20.161 13:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.161 13:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.161 13:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.161 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:20.161 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:20.420 00:25:20.420 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:20.420 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:20.420 13:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:20.679 13:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.679 13:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:20.679 13:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.679 13:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.679 13:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.679 13:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:20.679 { 00:25:20.679 "cntlid": 71, 00:25:20.679 "qid": 0, 00:25:20.679 "state": "enabled", 00:25:20.679 "listen_address": { 00:25:20.679 "trtype": "TCP", 00:25:20.679 "adrfam": "IPv4", 00:25:20.679 "traddr": "10.0.0.2", 00:25:20.679 "trsvcid": "4420" 00:25:20.679 }, 00:25:20.679 "peer_address": { 00:25:20.679 "trtype": "TCP", 00:25:20.679 "adrfam": "IPv4", 00:25:20.679 "traddr": "10.0.0.1", 00:25:20.679 "trsvcid": "51246" 00:25:20.679 }, 00:25:20.679 "auth": { 00:25:20.679 "state": "completed", 00:25:20.679 "digest": "sha384", 00:25:20.679 "dhgroup": "ffdhe3072" 00:25:20.679 } 00:25:20.679 } 00:25:20.679 ]' 00:25:20.679 13:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:20.937 13:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:20.937 13:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:20.937 13:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:20.937 13:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:20.937 13:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:20.937 13:53:35 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:20.937 13:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:21.196 13:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:25:21.763 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:21.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:21.763 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:21.763 13:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.763 13:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.763 13:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.763 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.763 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:21.763 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:21.763 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:22.023 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:25:22.023 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:22.023 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:22.023 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:22.023 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:22.023 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:22.023 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.023 13:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.023 13:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:22.023 13:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.023 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.023 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.282 00:25:22.540 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:22.540 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:22.540 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:22.540 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.540 13:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:22.541 13:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.541 13:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:22.799 13:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.799 13:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:22.799 { 00:25:22.799 "cntlid": 73, 00:25:22.799 "qid": 0, 00:25:22.799 "state": "enabled", 00:25:22.799 "listen_address": { 00:25:22.799 "trtype": "TCP", 00:25:22.799 "adrfam": "IPv4", 00:25:22.799 "traddr": "10.0.0.2", 00:25:22.799 "trsvcid": "4420" 00:25:22.799 }, 00:25:22.799 "peer_address": { 00:25:22.799 "trtype": "TCP", 00:25:22.799 "adrfam": "IPv4", 00:25:22.799 "traddr": "10.0.0.1", 00:25:22.799 "trsvcid": "51278" 00:25:22.799 }, 00:25:22.799 "auth": { 00:25:22.799 "state": "completed", 00:25:22.799 "digest": "sha384", 00:25:22.799 "dhgroup": "ffdhe4096" 00:25:22.799 } 00:25:22.799 } 00:25:22.799 ]' 00:25:22.799 13:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:22.799 13:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:22.799 13:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:22.799 13:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:22.799 13:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:22.799 13:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:22.799 13:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:22.799 13:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:23.058 13:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:25:23.626 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:23.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.885 13:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.144 13:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.144 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.144 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.403 00:25:24.403 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:24.403 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:24.403 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:24.662 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.662 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:24.662 13:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.662 13:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:25:24.662 13:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.662 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:24.662 { 00:25:24.662 "cntlid": 75, 00:25:24.662 "qid": 0, 00:25:24.662 "state": "enabled", 00:25:24.662 "listen_address": { 00:25:24.662 "trtype": "TCP", 00:25:24.662 "adrfam": "IPv4", 00:25:24.662 "traddr": "10.0.0.2", 00:25:24.662 "trsvcid": "4420" 00:25:24.662 }, 00:25:24.662 "peer_address": { 00:25:24.662 "trtype": "TCP", 00:25:24.662 "adrfam": "IPv4", 00:25:24.662 "traddr": "10.0.0.1", 00:25:24.662 "trsvcid": "53218" 00:25:24.662 }, 00:25:24.662 "auth": { 00:25:24.662 "state": "completed", 00:25:24.662 "digest": "sha384", 00:25:24.662 "dhgroup": "ffdhe4096" 00:25:24.662 } 00:25:24.662 } 00:25:24.662 ]' 00:25:24.662 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:24.662 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:24.662 13:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:24.662 13:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:24.662 13:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:24.662 13:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:24.662 13:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:24.662 13:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:24.921 13:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:25.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.858 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.426 00:25:26.426 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:26.426 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:26.426 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:26.426 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.426 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:26.426 13:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:26.426 13:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:26.426 13:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:26.426 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:26.426 { 00:25:26.426 "cntlid": 77, 00:25:26.426 "qid": 0, 00:25:26.426 "state": "enabled", 00:25:26.426 "listen_address": { 00:25:26.426 "trtype": "TCP", 00:25:26.426 "adrfam": "IPv4", 00:25:26.426 "traddr": "10.0.0.2", 00:25:26.426 "trsvcid": "4420" 00:25:26.426 }, 00:25:26.426 "peer_address": { 00:25:26.426 "trtype": "TCP", 00:25:26.426 "adrfam": "IPv4", 00:25:26.426 "traddr": "10.0.0.1", 00:25:26.426 "trsvcid": "53226" 00:25:26.426 }, 00:25:26.426 "auth": { 00:25:26.426 "state": "completed", 00:25:26.426 "digest": "sha384", 00:25:26.426 "dhgroup": "ffdhe4096" 00:25:26.426 } 00:25:26.426 } 00:25:26.426 ]' 00:25:26.426 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:26.685 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:26.685 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:25:26.685 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:26.685 13:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:26.685 13:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:26.685 13:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:26.685 13:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:26.944 13:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:25:27.512 13:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:27.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:27.512 13:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:27.512 13:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.512 13:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:27.512 13:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.512 13:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:27.512 13:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:27.512 13:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:27.771 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:25:27.771 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:27.771 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:27.771 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:27.771 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:27.771 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:27.771 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:25:27.771 13:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.771 13:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:27.771 13:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.771 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:27.771 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:28.338 00:25:28.338 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:28.338 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:28.338 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:28.338 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.338 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:28.338 13:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.338 13:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.338 13:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.338 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:28.338 { 00:25:28.338 "cntlid": 79, 00:25:28.338 "qid": 0, 00:25:28.338 "state": "enabled", 00:25:28.338 "listen_address": { 00:25:28.338 "trtype": "TCP", 00:25:28.338 "adrfam": "IPv4", 00:25:28.338 "traddr": "10.0.0.2", 00:25:28.338 "trsvcid": "4420" 00:25:28.338 }, 00:25:28.338 "peer_address": { 00:25:28.338 "trtype": "TCP", 00:25:28.338 "adrfam": "IPv4", 00:25:28.338 "traddr": "10.0.0.1", 00:25:28.338 "trsvcid": "53254" 00:25:28.338 }, 00:25:28.338 "auth": { 00:25:28.338 "state": "completed", 00:25:28.338 "digest": "sha384", 00:25:28.338 "dhgroup": "ffdhe4096" 00:25:28.338 } 00:25:28.338 } 00:25:28.338 ]' 00:25:28.338 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:28.597 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:28.597 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:28.597 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:28.597 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:28.597 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:28.597 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:28.597 13:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:28.856 13:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:25:29.421 13:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:29.421 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:29.421 13:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:29.421 13:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.421 13:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.421 13:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.421 13:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.421 13:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:29.421 13:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:29.421 13:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:29.680 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:25:29.680 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:29.680 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:29.680 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:29.680 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:29.680 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:29.680 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.680 13:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:29.680 13:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.680 13:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:29.680 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.680 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.247 00:25:30.247 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:30.247 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:30.247 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:30.505 { 00:25:30.505 "cntlid": 81, 00:25:30.505 "qid": 0, 00:25:30.505 "state": "enabled", 00:25:30.505 "listen_address": { 00:25:30.505 "trtype": "TCP", 00:25:30.505 "adrfam": "IPv4", 00:25:30.505 "traddr": "10.0.0.2", 00:25:30.505 "trsvcid": "4420" 00:25:30.505 }, 00:25:30.505 "peer_address": { 00:25:30.505 "trtype": "TCP", 00:25:30.505 "adrfam": "IPv4", 00:25:30.505 "traddr": "10.0.0.1", 00:25:30.505 "trsvcid": "53270" 00:25:30.505 }, 00:25:30.505 "auth": { 00:25:30.505 "state": "completed", 00:25:30.505 "digest": "sha384", 00:25:30.505 "dhgroup": "ffdhe6144" 00:25:30.505 } 00:25:30.505 } 00:25:30.505 ]' 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:30.505 13:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:30.764 13:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:25:31.332 13:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:31.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:31.332 13:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:31.332 13:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.332 13:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:31.591 13:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.591 13:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:31.591 13:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:31.591 13:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:31.591 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:25:31.591 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:31.591 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:31.591 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:31.591 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:31.591 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:31.591 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.591 13:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.591 13:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:31.591 13:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.591 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.591 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.159 00:25:32.159 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:32.159 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:32.159 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:32.418 { 00:25:32.418 "cntlid": 83, 00:25:32.418 "qid": 0, 00:25:32.418 "state": "enabled", 00:25:32.418 "listen_address": { 00:25:32.418 "trtype": "TCP", 00:25:32.418 "adrfam": "IPv4", 00:25:32.418 "traddr": "10.0.0.2", 00:25:32.418 "trsvcid": "4420" 00:25:32.418 }, 00:25:32.418 "peer_address": { 00:25:32.418 "trtype": "TCP", 00:25:32.418 "adrfam": "IPv4", 00:25:32.418 "traddr": "10.0.0.1", 00:25:32.418 "trsvcid": "53298" 00:25:32.418 }, 00:25:32.418 "auth": { 00:25:32.418 "state": "completed", 00:25:32.418 "digest": "sha384", 00:25:32.418 
"dhgroup": "ffdhe6144" 00:25:32.418 } 00:25:32.418 } 00:25:32.418 ]' 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:32.418 13:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:32.677 13:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:25:33.614 13:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:33.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:33.614 13:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:33.614 13:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.614 13:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.614 13:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.614 13:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:33.614 13:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:33.614 13:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:33.614 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:25:33.614 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:33.614 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:33.614 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:33.614 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:33.614 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:33.614 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.614 13:53:48 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.614 13:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.873 13:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.873 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.873 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.132 00:25:34.132 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:34.132 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:34.132 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:34.391 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.391 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:34.391 13:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.391 13:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:34.391 13:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.391 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:34.391 { 00:25:34.391 "cntlid": 85, 00:25:34.391 "qid": 0, 00:25:34.391 "state": "enabled", 00:25:34.391 "listen_address": { 00:25:34.391 "trtype": "TCP", 00:25:34.391 "adrfam": "IPv4", 00:25:34.391 "traddr": "10.0.0.2", 00:25:34.391 "trsvcid": "4420" 00:25:34.391 }, 00:25:34.391 "peer_address": { 00:25:34.391 "trtype": "TCP", 00:25:34.391 "adrfam": "IPv4", 00:25:34.391 "traddr": "10.0.0.1", 00:25:34.391 "trsvcid": "43114" 00:25:34.391 }, 00:25:34.391 "auth": { 00:25:34.391 "state": "completed", 00:25:34.391 "digest": "sha384", 00:25:34.391 "dhgroup": "ffdhe6144" 00:25:34.391 } 00:25:34.391 } 00:25:34.391 ]' 00:25:34.391 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:34.391 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:34.391 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:34.391 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:34.391 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:34.650 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:34.650 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:34.650 13:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:34.910 13:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:25:35.478 13:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:35.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:35.478 13:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:35.478 13:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.478 13:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.478 13:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.478 13:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:35.478 13:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:35.478 13:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:35.738 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:25:35.738 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:35.738 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:35.738 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:35.738 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:35.738 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:35.738 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:25:35.738 13:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.738 13:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.738 13:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.738 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:35.738 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:36.305 00:25:36.305 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:36.305 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:36.305 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:36.564 { 00:25:36.564 "cntlid": 87, 00:25:36.564 "qid": 0, 00:25:36.564 "state": "enabled", 00:25:36.564 "listen_address": { 00:25:36.564 "trtype": "TCP", 00:25:36.564 "adrfam": "IPv4", 00:25:36.564 "traddr": "10.0.0.2", 00:25:36.564 "trsvcid": "4420" 00:25:36.564 }, 00:25:36.564 "peer_address": { 00:25:36.564 "trtype": "TCP", 00:25:36.564 "adrfam": "IPv4", 00:25:36.564 "traddr": "10.0.0.1", 00:25:36.564 "trsvcid": "43138" 00:25:36.564 }, 00:25:36.564 "auth": { 00:25:36.564 "state": "completed", 00:25:36.564 "digest": "sha384", 00:25:36.564 "dhgroup": "ffdhe6144" 00:25:36.564 } 00:25:36.564 } 00:25:36.564 ]' 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:36.564 13:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:36.823 13:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:25:37.391 13:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:37.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:37.392 13:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:37.392 13:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.392 13:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.652 13:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.652 13:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 
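The trace has just closed an ffdhe6144 key pass and is advancing the outer loop to the next dhgroup. For readability, here is a condensed, hypothetical bash reconstruction of the loop body that auth.sh keeps replaying above; it uses only the RPCs, flags, NQNs, and socket paths visible in this run, and hostrpc/rpc_cmd are re-declared as plain wrappers here, whereas the real suite supplies them as framework helpers.

# Hypothetical condensed reconstruction of one connect_authenticate iteration (sketch, not the script's source)
digest=sha384 dhgroup=ffdhe8192 keyid=0
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side bdev_nvme RPCs, as in the trace
rpc_cmd() { "$rpc" "$@"; }                         # stand-in for the suite's target-side RPC helper

# Host: allow exactly one digest/dhgroup combination for DH-HMAC-CHAP.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Target: register the host NQN with its key; the ctrl key is passed only when one is
# defined for this keyid (the trace omits it for key3).
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# Host: attach, then verify the controller name and the qpair's auth block via jq.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
# Host: detach before the kernel-initiator (nvme-cli) half of the pass.
hostrpc bdev_nvme_detach_controller nvme0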
00:25:37.652 13:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:37.652 13:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:37.652 13:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:37.652 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:25:37.652 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:37.652 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:37.652 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:37.652 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:37.652 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:37.652 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.652 13:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.652 13:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.652 13:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.652 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.652 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.265 00:25:38.553 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:38.553 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:38.553 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:38.553 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.553 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:38.553 13:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.553 13:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.553 13:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.553 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:38.553 { 00:25:38.553 "cntlid": 89, 00:25:38.553 "qid": 0, 00:25:38.553 "state": "enabled", 00:25:38.553 "listen_address": { 00:25:38.553 "trtype": "TCP", 00:25:38.553 "adrfam": "IPv4", 00:25:38.553 "traddr": "10.0.0.2", 
00:25:38.553 "trsvcid": "4420" 00:25:38.553 }, 00:25:38.553 "peer_address": { 00:25:38.553 "trtype": "TCP", 00:25:38.553 "adrfam": "IPv4", 00:25:38.553 "traddr": "10.0.0.1", 00:25:38.553 "trsvcid": "43162" 00:25:38.553 }, 00:25:38.553 "auth": { 00:25:38.553 "state": "completed", 00:25:38.553 "digest": "sha384", 00:25:38.553 "dhgroup": "ffdhe8192" 00:25:38.553 } 00:25:38.553 } 00:25:38.553 ]' 00:25:38.553 13:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:38.553 13:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:38.553 13:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:38.812 13:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:38.812 13:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:38.812 13:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:38.812 13:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:38.812 13:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:39.071 13:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:25:39.639 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:39.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:39.639 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:39.639 13:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.639 13:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.639 13:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.639 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:39.639 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.639 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.898 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:25:39.898 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:39.898 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:39.898 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:39.898 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:39.898 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 
-- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:39.898 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.898 13:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.898 13:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.898 13:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.898 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.898 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.466 00:25:40.466 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:40.466 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:40.466 13:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:40.724 13:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.724 13:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:40.724 13:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:40.724 13:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.724 13:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:40.724 13:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:40.724 { 00:25:40.724 "cntlid": 91, 00:25:40.724 "qid": 0, 00:25:40.724 "state": "enabled", 00:25:40.724 "listen_address": { 00:25:40.724 "trtype": "TCP", 00:25:40.724 "adrfam": "IPv4", 00:25:40.724 "traddr": "10.0.0.2", 00:25:40.724 "trsvcid": "4420" 00:25:40.724 }, 00:25:40.724 "peer_address": { 00:25:40.724 "trtype": "TCP", 00:25:40.724 "adrfam": "IPv4", 00:25:40.724 "traddr": "10.0.0.1", 00:25:40.724 "trsvcid": "43190" 00:25:40.724 }, 00:25:40.724 "auth": { 00:25:40.724 "state": "completed", 00:25:40.724 "digest": "sha384", 00:25:40.724 "dhgroup": "ffdhe8192" 00:25:40.724 } 00:25:40.724 } 00:25:40.724 ]' 00:25:40.724 13:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:40.982 13:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:40.982 13:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:40.982 13:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:40.982 13:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:40.982 13:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:40.982 13:53:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:40.982 13:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:41.240 13:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:25:41.807 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:41.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:41.807 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:41.807 13:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.807 13:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:41.807 13:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.807 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:41.807 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:41.807 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:42.065 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:25:42.065 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:42.065 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:42.065 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:42.065 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:42.065 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:42.065 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.065 13:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.065 13:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:42.065 13:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.065 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.065 13:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.001 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:43.001 { 00:25:43.001 "cntlid": 93, 00:25:43.001 "qid": 0, 00:25:43.001 "state": "enabled", 00:25:43.001 "listen_address": { 00:25:43.001 "trtype": "TCP", 00:25:43.001 "adrfam": "IPv4", 00:25:43.001 "traddr": "10.0.0.2", 00:25:43.001 "trsvcid": "4420" 00:25:43.001 }, 00:25:43.001 "peer_address": { 00:25:43.001 "trtype": "TCP", 00:25:43.001 "adrfam": "IPv4", 00:25:43.001 "traddr": "10.0.0.1", 00:25:43.001 "trsvcid": "43212" 00:25:43.001 }, 00:25:43.001 "auth": { 00:25:43.001 "state": "completed", 00:25:43.001 "digest": "sha384", 00:25:43.001 "dhgroup": "ffdhe8192" 00:25:43.001 } 00:25:43.001 } 00:25:43.001 ]' 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:43.001 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:43.259 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:43.259 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:43.259 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:43.517 13:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:25:44.083 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:44.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:44.083 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:44.083 13:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:44.083 13:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:44.083 13:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:44.083 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:44.083 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:44.083 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:44.342 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:25:44.342 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:44.342 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:44.342 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:44.342 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:44.342 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:44.342 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:25:44.342 13:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:44.342 13:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:44.342 13:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:44.342 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:44.342 13:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:44.908 00:25:44.908 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:44.908 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:44.908 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:45.169 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.169 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:45.169 13:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.169 13:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:45.169 13:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.169 13:53:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:45.169 { 00:25:45.169 "cntlid": 95, 00:25:45.169 "qid": 0, 00:25:45.169 "state": "enabled", 00:25:45.169 "listen_address": { 00:25:45.169 "trtype": "TCP", 00:25:45.169 "adrfam": "IPv4", 00:25:45.169 "traddr": "10.0.0.2", 00:25:45.169 "trsvcid": "4420" 00:25:45.169 }, 00:25:45.169 "peer_address": { 00:25:45.169 "trtype": "TCP", 00:25:45.169 "adrfam": "IPv4", 00:25:45.169 "traddr": "10.0.0.1", 00:25:45.169 "trsvcid": "59232" 00:25:45.169 }, 00:25:45.169 "auth": { 00:25:45.169 "state": "completed", 00:25:45.169 "digest": "sha384", 00:25:45.169 "dhgroup": "ffdhe8192" 00:25:45.169 } 00:25:45.169 } 00:25:45.169 ]' 00:25:45.169 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:45.169 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:45.169 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:45.428 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:45.428 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:45.428 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:45.428 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:45.428 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:45.686 13:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:25:46.253 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:46.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:46.253 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:46.253 13:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:46.253 13:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:46.253 13:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:46.253 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:25:46.253 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.253 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:46.253 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:46.253 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:46.511 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:25:46.511 13:54:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:46.511 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:46.511 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:46.511 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:46.511 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:46.511 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.511 13:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:46.512 13:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:46.512 13:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:46.512 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.512 13:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.770 00:25:46.770 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:46.770 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:46.770 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:47.028 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.028 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:47.028 13:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:47.028 13:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:47.028 13:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:47.028 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:47.028 { 00:25:47.028 "cntlid": 97, 00:25:47.028 "qid": 0, 00:25:47.028 "state": "enabled", 00:25:47.028 "listen_address": { 00:25:47.028 "trtype": "TCP", 00:25:47.028 "adrfam": "IPv4", 00:25:47.028 "traddr": "10.0.0.2", 00:25:47.028 "trsvcid": "4420" 00:25:47.028 }, 00:25:47.028 "peer_address": { 00:25:47.028 "trtype": "TCP", 00:25:47.028 "adrfam": "IPv4", 00:25:47.028 "traddr": "10.0.0.1", 00:25:47.028 "trsvcid": "59258" 00:25:47.028 }, 00:25:47.028 "auth": { 00:25:47.028 "state": "completed", 00:25:47.028 "digest": "sha512", 00:25:47.028 "dhgroup": "null" 00:25:47.028 } 00:25:47.028 } 00:25:47.028 ]' 00:25:47.028 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:47.286 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:47.286 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:25:47.286 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:47.286 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:47.286 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:47.287 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:47.287 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:47.545 13:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:25:48.112 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:48.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:48.112 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:48.112 13:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.112 13:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.113 13:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.113 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:48.113 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:48.113 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:48.371 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:25:48.371 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:48.371 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:48.371 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:48.371 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:48.371 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:48.371 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.371 13:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.371 13:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.371 13:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.371 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.371 13:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.630 00:25:48.630 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:48.630 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:48.630 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:48.888 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.888 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:48.888 13:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.888 13:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.888 13:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.888 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:48.888 { 00:25:48.888 "cntlid": 99, 00:25:48.888 "qid": 0, 00:25:48.888 "state": "enabled", 00:25:48.888 "listen_address": { 00:25:48.888 "trtype": "TCP", 00:25:48.888 "adrfam": "IPv4", 00:25:48.888 "traddr": "10.0.0.2", 00:25:48.888 "trsvcid": "4420" 00:25:48.888 }, 00:25:48.888 "peer_address": { 00:25:48.888 "trtype": "TCP", 00:25:48.888 "adrfam": "IPv4", 00:25:48.888 "traddr": "10.0.0.1", 00:25:48.889 "trsvcid": "59278" 00:25:48.889 }, 00:25:48.889 "auth": { 00:25:48.889 "state": "completed", 00:25:48.889 "digest": "sha512", 00:25:48.889 "dhgroup": "null" 00:25:48.889 } 00:25:48.889 } 00:25:48.889 ]' 00:25:48.889 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:48.889 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:48.889 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:48.889 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:48.889 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:49.148 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:49.148 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:49.148 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:49.406 13:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 
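After the SPDK host controller is detached, each pass repeats the authentication through the kernel initiator: nvme-cli connects in-band with the DH-HMAC-CHAP secrets, disconnects, and the host is removed from the subsystem before the next key is tried. A minimal sketch of that half, with placeholder secrets standing in for the full DHHC-1 strings logged above:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562
hostid=809b5fbc-4be7-e711-906e-0017a4403562
key='DHHC-1:01:<host-secret>'        # placeholder; the run passes the DHHC-1 strings shown above
ctrl_key='DHHC-1:02:<ctrl-secret>'   # placeholder; only used for keys that have a controller secret

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"
nvme disconnect -n "$subnqn"         # expected to report "disconnected 1 controller(s)"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"   # rpc_cmd: the suite's target-side RPC helper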
00:25:49.974 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:49.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:49.974 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:49.974 13:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:49.974 13:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:49.974 13:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:49.974 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:49.974 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:49.974 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:50.232 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:25:50.232 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:50.232 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:50.232 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:50.232 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:50.232 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:50.232 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.232 13:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.232 13:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:50.232 13:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.232 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.232 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.491 00:25:50.491 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:50.491 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:50.491 13:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:50.750 { 00:25:50.750 "cntlid": 101, 00:25:50.750 "qid": 0, 00:25:50.750 "state": "enabled", 00:25:50.750 "listen_address": { 00:25:50.750 "trtype": "TCP", 00:25:50.750 "adrfam": "IPv4", 00:25:50.750 "traddr": "10.0.0.2", 00:25:50.750 "trsvcid": "4420" 00:25:50.750 }, 00:25:50.750 "peer_address": { 00:25:50.750 "trtype": "TCP", 00:25:50.750 "adrfam": "IPv4", 00:25:50.750 "traddr": "10.0.0.1", 00:25:50.750 "trsvcid": "59300" 00:25:50.750 }, 00:25:50.750 "auth": { 00:25:50.750 "state": "completed", 00:25:50.750 "digest": "sha512", 00:25:50.750 "dhgroup": "null" 00:25:50.750 } 00:25:50.750 } 00:25:50.750 ]' 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:50.750 13:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:51.009 13:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:51.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:51.946 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:52.205 00:25:52.205 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:52.205 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:52.205 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:52.464 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.464 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:52.464 13:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.464 13:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:52.464 13:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.464 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:52.464 { 00:25:52.464 "cntlid": 103, 00:25:52.464 "qid": 0, 00:25:52.464 "state": "enabled", 00:25:52.464 "listen_address": { 00:25:52.464 "trtype": "TCP", 00:25:52.464 "adrfam": "IPv4", 00:25:52.464 "traddr": "10.0.0.2", 00:25:52.464 "trsvcid": "4420" 00:25:52.464 }, 00:25:52.464 "peer_address": { 00:25:52.464 "trtype": "TCP", 00:25:52.464 "adrfam": "IPv4", 00:25:52.464 "traddr": "10.0.0.1", 00:25:52.464 "trsvcid": "59338" 00:25:52.464 }, 00:25:52.464 "auth": { 00:25:52.464 "state": "completed", 00:25:52.464 "digest": "sha512", 00:25:52.464 "dhgroup": "null" 00:25:52.464 } 00:25:52.464 } 00:25:52.464 ]' 00:25:52.464 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:52.722 13:54:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:52.722 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:52.722 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:52.722 13:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:52.722 13:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:52.722 13:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:52.722 13:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:52.981 13:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:25:53.548 13:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:53.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:53.548 13:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:53.548 13:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.548 13:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:53.548 13:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.548 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.548 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:53.548 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:53.548 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:53.807 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:25:53.807 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:53.807 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:53.807 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:53.807 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:53.807 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:53.807 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.807 13:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.807 13:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:53.807 13:54:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.807 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.807 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.066 00:25:54.066 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:54.066 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:54.066 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:54.325 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.325 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:54.325 13:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.325 13:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:54.325 13:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.325 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:54.325 { 00:25:54.325 "cntlid": 105, 00:25:54.325 "qid": 0, 00:25:54.325 "state": "enabled", 00:25:54.325 "listen_address": { 00:25:54.325 "trtype": "TCP", 00:25:54.325 "adrfam": "IPv4", 00:25:54.325 "traddr": "10.0.0.2", 00:25:54.325 "trsvcid": "4420" 00:25:54.325 }, 00:25:54.325 "peer_address": { 00:25:54.325 "trtype": "TCP", 00:25:54.325 "adrfam": "IPv4", 00:25:54.325 "traddr": "10.0.0.1", 00:25:54.325 "trsvcid": "33594" 00:25:54.325 }, 00:25:54.325 "auth": { 00:25:54.325 "state": "completed", 00:25:54.325 "digest": "sha512", 00:25:54.325 "dhgroup": "ffdhe2048" 00:25:54.325 } 00:25:54.325 } 00:25:54.325 ]' 00:25:54.325 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:54.583 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:54.583 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:54.583 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:54.584 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:54.584 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:54.584 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:54.584 13:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:54.842 13:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 
809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:25:55.411 13:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:55.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:55.411 13:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:55.411 13:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.411 13:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:55.411 13:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.411 13:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:55.411 13:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:55.411 13:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:55.669 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:25:55.669 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:55.669 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:55.669 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:55.669 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:55.669 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:55.669 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.669 13:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.669 13:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:55.670 13:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.670 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.670 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.928 00:25:56.187 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:56.187 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:56.187 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:56.187 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.187 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:56.187 13:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:56.187 13:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.187 13:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:56.187 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:56.187 { 00:25:56.187 "cntlid": 107, 00:25:56.187 "qid": 0, 00:25:56.187 "state": "enabled", 00:25:56.187 "listen_address": { 00:25:56.187 "trtype": "TCP", 00:25:56.187 "adrfam": "IPv4", 00:25:56.187 "traddr": "10.0.0.2", 00:25:56.187 "trsvcid": "4420" 00:25:56.187 }, 00:25:56.187 "peer_address": { 00:25:56.187 "trtype": "TCP", 00:25:56.187 "adrfam": "IPv4", 00:25:56.187 "traddr": "10.0.0.1", 00:25:56.187 "trsvcid": "33616" 00:25:56.187 }, 00:25:56.187 "auth": { 00:25:56.187 "state": "completed", 00:25:56.187 "digest": "sha512", 00:25:56.187 "dhgroup": "ffdhe2048" 00:25:56.187 } 00:25:56.187 } 00:25:56.187 ]' 00:25:56.187 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:56.446 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:56.446 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:56.446 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:56.446 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:56.446 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:56.446 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:56.446 13:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:56.705 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:25:57.272 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:57.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:57.530 13:54:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.530 13:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.789 00:25:57.789 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:57.789 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:57.789 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:58.048 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.048 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:58.048 13:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.048 13:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:58.048 13:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.048 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:58.048 { 00:25:58.048 "cntlid": 109, 00:25:58.048 "qid": 0, 00:25:58.048 "state": "enabled", 00:25:58.048 "listen_address": { 00:25:58.048 "trtype": "TCP", 00:25:58.048 "adrfam": "IPv4", 00:25:58.048 "traddr": "10.0.0.2", 00:25:58.048 "trsvcid": "4420" 00:25:58.048 }, 00:25:58.048 "peer_address": { 00:25:58.048 "trtype": "TCP", 00:25:58.048 
"adrfam": "IPv4", 00:25:58.048 "traddr": "10.0.0.1", 00:25:58.048 "trsvcid": "33634" 00:25:58.048 }, 00:25:58.048 "auth": { 00:25:58.048 "state": "completed", 00:25:58.048 "digest": "sha512", 00:25:58.048 "dhgroup": "ffdhe2048" 00:25:58.048 } 00:25:58.048 } 00:25:58.048 ]' 00:25:58.048 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:58.307 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:58.307 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:58.307 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:58.307 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:58.307 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:58.307 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:58.307 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:58.566 13:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:25:59.183 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:59.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:59.183 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:59.183 13:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.183 13:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.183 13:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.183 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:59.183 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:59.183 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:59.442 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:25:59.442 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:59.442 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:59.442 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:59.442 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:59.442 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:59.442 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:25:59.442 13:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.442 13:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.442 13:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.442 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:59.442 13:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:59.702 00:25:59.702 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:59.702 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:59.702 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:59.962 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.962 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:59.962 13:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.962 13:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.962 13:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.962 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:59.962 { 00:25:59.962 "cntlid": 111, 00:25:59.962 "qid": 0, 00:25:59.962 "state": "enabled", 00:25:59.962 "listen_address": { 00:25:59.962 "trtype": "TCP", 00:25:59.962 "adrfam": "IPv4", 00:25:59.962 "traddr": "10.0.0.2", 00:25:59.962 "trsvcid": "4420" 00:25:59.962 }, 00:25:59.962 "peer_address": { 00:25:59.962 "trtype": "TCP", 00:25:59.962 "adrfam": "IPv4", 00:25:59.962 "traddr": "10.0.0.1", 00:25:59.962 "trsvcid": "33660" 00:25:59.962 }, 00:25:59.962 "auth": { 00:25:59.962 "state": "completed", 00:25:59.962 "digest": "sha512", 00:25:59.962 "dhgroup": "ffdhe2048" 00:25:59.962 } 00:25:59.962 } 00:25:59.962 ]' 00:25:59.962 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:59.962 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:59.962 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:59.962 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:59.962 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:00.221 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:00.221 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:00.221 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:00.480 13:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:26:01.048 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:01.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:01.048 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:01.048 13:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.048 13:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.048 13:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.048 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.048 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:01.048 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:01.048 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:01.307 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:26:01.307 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:01.307 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:01.307 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:01.307 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:01.307 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:01.307 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.307 13:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.307 13:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.307 13:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.307 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.307 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
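The "for dhgroup" / "for keyid" trace lines (target/auth.sh@92-96) recur before every cycle like the one above, so the rest of this log is the same sequence driven by a nested loop roughly like the sketch below. This is a reconstruction under the assumption that the arrays hold the values visible in this log (dhgroups seen here: null, ffdhe2048, ffdhe3072, ffdhe4096; key ids 0 through 3, where key3 is registered without a controller key), and only the sha512 digest is exercised in this part of the trace.

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # limit the host to a single digest/dhgroup combination, then run one authentication cycle
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done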
00:26:01.566 00:26:01.566 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:01.566 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:01.566 13:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:01.825 13:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.825 13:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:01.825 13:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.825 13:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.825 13:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.825 13:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:01.825 { 00:26:01.825 "cntlid": 113, 00:26:01.825 "qid": 0, 00:26:01.825 "state": "enabled", 00:26:01.825 "listen_address": { 00:26:01.825 "trtype": "TCP", 00:26:01.825 "adrfam": "IPv4", 00:26:01.825 "traddr": "10.0.0.2", 00:26:01.825 "trsvcid": "4420" 00:26:01.825 }, 00:26:01.825 "peer_address": { 00:26:01.825 "trtype": "TCP", 00:26:01.825 "adrfam": "IPv4", 00:26:01.825 "traddr": "10.0.0.1", 00:26:01.825 "trsvcid": "33688" 00:26:01.825 }, 00:26:01.825 "auth": { 00:26:01.825 "state": "completed", 00:26:01.825 "digest": "sha512", 00:26:01.825 "dhgroup": "ffdhe3072" 00:26:01.825 } 00:26:01.825 } 00:26:01.825 ]' 00:26:01.825 13:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:01.825 13:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:01.825 13:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:02.085 13:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:02.085 13:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:02.085 13:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:02.085 13:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:02.085 13:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:02.085 13:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:03.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
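A note on the --dhchap-secret / --dhchap-ctrl-secret values passed to nvme connect throughout this log: they follow the DHHC-1 representation used for NVMe in-band authentication secrets, i.e. a string of the form

    DHHC-1:<t>:<base64 secret material>:

where, as I read the format (treat this as background rather than something the log itself states), <t>=00 means the secret is used as-is while 01/02/03 mean it has been transformed with SHA-256/384/512, which is why the DHHC-1:03: strings here are visibly longer than the DHHC-1:00: and DHHC-1:01: ones. The secrets are throwaway keys for this test run, and the same key/ckey strings recur across iterations because each cycle re-registers the same fixed key set under a different digest/dhgroup combination.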
00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.025 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.594 00:26:03.594 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:03.594 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:03.594 13:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:03.594 13:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.594 13:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:03.594 13:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:03.594 13:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.594 13:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:03.594 13:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:03.594 { 00:26:03.594 
"cntlid": 115, 00:26:03.594 "qid": 0, 00:26:03.594 "state": "enabled", 00:26:03.594 "listen_address": { 00:26:03.594 "trtype": "TCP", 00:26:03.594 "adrfam": "IPv4", 00:26:03.594 "traddr": "10.0.0.2", 00:26:03.594 "trsvcid": "4420" 00:26:03.594 }, 00:26:03.594 "peer_address": { 00:26:03.594 "trtype": "TCP", 00:26:03.594 "adrfam": "IPv4", 00:26:03.594 "traddr": "10.0.0.1", 00:26:03.594 "trsvcid": "33732" 00:26:03.594 }, 00:26:03.594 "auth": { 00:26:03.594 "state": "completed", 00:26:03.594 "digest": "sha512", 00:26:03.594 "dhgroup": "ffdhe3072" 00:26:03.594 } 00:26:03.594 } 00:26:03.594 ]' 00:26:03.594 13:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:03.852 13:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:03.852 13:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:03.852 13:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:03.852 13:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:03.852 13:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:03.852 13:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:03.852 13:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:04.110 13:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:26:04.678 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:04.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:04.678 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:04.678 13:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:04.678 13:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:04.678 13:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:04.678 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:04.678 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:04.678 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:04.938 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:26:04.938 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:04.938 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:04.938 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:26:04.938 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:04.938 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:04.938 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.938 13:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:04.938 13:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:04.938 13:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:04.938 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.938 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.197 00:26:05.197 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:05.197 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:05.197 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:05.456 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.456 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:05.456 13:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.456 13:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:05.456 13:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.456 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:05.456 { 00:26:05.456 "cntlid": 117, 00:26:05.456 "qid": 0, 00:26:05.456 "state": "enabled", 00:26:05.456 "listen_address": { 00:26:05.456 "trtype": "TCP", 00:26:05.456 "adrfam": "IPv4", 00:26:05.456 "traddr": "10.0.0.2", 00:26:05.456 "trsvcid": "4420" 00:26:05.456 }, 00:26:05.456 "peer_address": { 00:26:05.456 "trtype": "TCP", 00:26:05.456 "adrfam": "IPv4", 00:26:05.456 "traddr": "10.0.0.1", 00:26:05.456 "trsvcid": "60708" 00:26:05.456 }, 00:26:05.456 "auth": { 00:26:05.456 "state": "completed", 00:26:05.456 "digest": "sha512", 00:26:05.456 "dhgroup": "ffdhe3072" 00:26:05.456 } 00:26:05.456 } 00:26:05.456 ]' 00:26:05.456 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:05.715 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:05.715 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:05.715 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:05.715 13:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:26:05.715 13:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:05.715 13:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:05.715 13:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:05.974 13:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:26:06.543 13:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:06.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:06.543 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:06.543 13:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.543 13:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:06.543 13:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.543 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:06.543 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:06.543 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:06.802 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:26:06.802 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:06.802 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:06.802 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:06.802 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:06.802 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:06.802 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:26:06.802 13:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.802 13:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:06.802 13:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.802 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:06.802 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:07.060 00:26:07.060 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:07.060 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:07.060 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:07.318 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.318 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:07.318 13:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:07.318 13:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:07.318 13:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:07.318 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:07.318 { 00:26:07.318 "cntlid": 119, 00:26:07.318 "qid": 0, 00:26:07.318 "state": "enabled", 00:26:07.318 "listen_address": { 00:26:07.318 "trtype": "TCP", 00:26:07.318 "adrfam": "IPv4", 00:26:07.318 "traddr": "10.0.0.2", 00:26:07.319 "trsvcid": "4420" 00:26:07.319 }, 00:26:07.319 "peer_address": { 00:26:07.319 "trtype": "TCP", 00:26:07.319 "adrfam": "IPv4", 00:26:07.319 "traddr": "10.0.0.1", 00:26:07.319 "trsvcid": "60734" 00:26:07.319 }, 00:26:07.319 "auth": { 00:26:07.319 "state": "completed", 00:26:07.319 "digest": "sha512", 00:26:07.319 "dhgroup": "ffdhe3072" 00:26:07.319 } 00:26:07.319 } 00:26:07.319 ]' 00:26:07.319 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:07.577 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:07.577 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:07.577 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:07.577 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:07.577 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:07.577 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:07.577 13:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:07.838 13:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:26:08.408 13:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:08.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:08.408 13:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:08.408 13:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.408 13:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:08.408 13:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:08.408 13:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.408 13:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:08.408 13:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:08.408 13:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:08.667 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:26:08.667 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:08.667 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:08.667 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:08.667 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:08.667 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:08.667 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.667 13:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.667 13:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:08.667 13:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:08.667 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.667 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.926 00:26:08.926 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:08.926 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:08.926 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:09.184 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.184 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:09.184 13:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.184 13:54:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.184 13:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.184 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:09.184 { 00:26:09.184 "cntlid": 121, 00:26:09.184 "qid": 0, 00:26:09.184 "state": "enabled", 00:26:09.184 "listen_address": { 00:26:09.184 "trtype": "TCP", 00:26:09.184 "adrfam": "IPv4", 00:26:09.184 "traddr": "10.0.0.2", 00:26:09.184 "trsvcid": "4420" 00:26:09.184 }, 00:26:09.184 "peer_address": { 00:26:09.184 "trtype": "TCP", 00:26:09.184 "adrfam": "IPv4", 00:26:09.184 "traddr": "10.0.0.1", 00:26:09.184 "trsvcid": "60756" 00:26:09.184 }, 00:26:09.184 "auth": { 00:26:09.184 "state": "completed", 00:26:09.184 "digest": "sha512", 00:26:09.184 "dhgroup": "ffdhe4096" 00:26:09.184 } 00:26:09.184 } 00:26:09.184 ]' 00:26:09.184 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:09.184 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:09.184 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:09.443 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:09.443 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:09.443 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:09.443 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:09.443 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:09.701 13:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:26:10.268 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:10.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:10.268 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:10.268 13:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.268 13:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:10.268 13:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.269 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:10.269 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:10.269 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:10.527 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:26:10.527 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:10.527 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:10.527 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:10.527 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:10.527 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:10.527 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.527 13:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.527 13:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:10.527 13:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.527 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.527 13:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.786 00:26:10.786 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:10.786 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:10.786 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:11.045 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.045 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:11.045 13:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.045 13:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.045 13:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.045 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:11.045 { 00:26:11.045 "cntlid": 123, 00:26:11.045 "qid": 0, 00:26:11.045 "state": "enabled", 00:26:11.045 "listen_address": { 00:26:11.045 "trtype": "TCP", 00:26:11.045 "adrfam": "IPv4", 00:26:11.045 "traddr": "10.0.0.2", 00:26:11.045 "trsvcid": "4420" 00:26:11.045 }, 00:26:11.045 "peer_address": { 00:26:11.045 "trtype": "TCP", 00:26:11.045 "adrfam": "IPv4", 00:26:11.045 "traddr": "10.0.0.1", 00:26:11.045 "trsvcid": "60786" 00:26:11.045 }, 00:26:11.045 "auth": { 00:26:11.045 "state": "completed", 00:26:11.045 "digest": "sha512", 00:26:11.045 "dhgroup": "ffdhe4096" 00:26:11.045 } 00:26:11.045 } 00:26:11.045 ]' 00:26:11.045 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:11.045 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:26:11.045 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:11.305 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:11.305 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:11.305 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:11.305 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:11.305 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:11.564 13:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:26:12.132 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:12.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:12.132 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:12.132 13:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.132 13:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:12.132 13:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.132 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:12.132 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:12.132 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:12.392 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:26:12.392 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:12.392 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:12.392 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:12.392 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:12.392 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:12.392 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.392 13:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.392 13:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:12.392 13:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.392 
13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.392 13:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.651 00:26:12.911 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:12.911 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:12.911 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:12.911 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.911 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:12.911 13:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.911 13:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:13.171 13:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.171 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:13.171 { 00:26:13.171 "cntlid": 125, 00:26:13.171 "qid": 0, 00:26:13.171 "state": "enabled", 00:26:13.171 "listen_address": { 00:26:13.171 "trtype": "TCP", 00:26:13.171 "adrfam": "IPv4", 00:26:13.171 "traddr": "10.0.0.2", 00:26:13.171 "trsvcid": "4420" 00:26:13.171 }, 00:26:13.171 "peer_address": { 00:26:13.171 "trtype": "TCP", 00:26:13.171 "adrfam": "IPv4", 00:26:13.171 "traddr": "10.0.0.1", 00:26:13.171 "trsvcid": "60818" 00:26:13.171 }, 00:26:13.171 "auth": { 00:26:13.171 "state": "completed", 00:26:13.171 "digest": "sha512", 00:26:13.171 "dhgroup": "ffdhe4096" 00:26:13.171 } 00:26:13.171 } 00:26:13.171 ]' 00:26:13.171 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:13.171 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:13.171 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:13.171 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:13.171 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:13.171 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:13.171 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:13.171 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:13.430 13:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret 
DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:26:13.999 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:13.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:13.999 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:13.999 13:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.999 13:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:13.999 13:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.999 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:13.999 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:13.999 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:14.258 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:26:14.258 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:14.258 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:14.258 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:14.258 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:14.258 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:14.259 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:26:14.259 13:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.259 13:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:14.259 13:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.259 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:14.259 13:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:14.827 00:26:14.827 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:14.827 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:14.827 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:14.827 13:54:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.827 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:14.827 13:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.827 13:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:14.827 13:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.827 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:14.827 { 00:26:14.827 "cntlid": 127, 00:26:14.827 "qid": 0, 00:26:14.827 "state": "enabled", 00:26:14.827 "listen_address": { 00:26:14.827 "trtype": "TCP", 00:26:14.827 "adrfam": "IPv4", 00:26:14.827 "traddr": "10.0.0.2", 00:26:14.827 "trsvcid": "4420" 00:26:14.827 }, 00:26:14.827 "peer_address": { 00:26:14.827 "trtype": "TCP", 00:26:14.827 "adrfam": "IPv4", 00:26:14.827 "traddr": "10.0.0.1", 00:26:14.827 "trsvcid": "53590" 00:26:14.827 }, 00:26:14.827 "auth": { 00:26:14.827 "state": "completed", 00:26:14.827 "digest": "sha512", 00:26:14.827 "dhgroup": "ffdhe4096" 00:26:14.827 } 00:26:14.827 } 00:26:14.827 ]' 00:26:14.827 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:15.087 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:15.087 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:15.087 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:15.087 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:15.087 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:15.087 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:15.087 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:15.346 13:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:26:15.915 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:15.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:15.915 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:15.915 13:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.915 13:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:15.915 13:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.915 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.915 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:15.915 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
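The ffdhe4096 passes end at this point and the log below repeats the identical authentication round-trip for ffdhe6144 and ffdhe8192 with each key index. As a hedged, condensed sketch of that per-group/per-key loop (shell only; every rpc.py subcommand, flag, path, NQN, address and key name is copied from the invocations already visible in this log, while the loop structure itself is an illustrative simplification of target/auth.sh, not its verbatim source):

# Condensed sketch of the DH-HMAC-CHAP round-trip this log repeats per (dhgroup, key) pair.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562

for dhgroup in ffdhe6144 ffdhe8192; do
  for keyid in 0 1 2 3; do
    # Host-side initiator: restrict DH-HMAC-CHAP negotiation to sha512 plus one DH group.
    $rpc -s $hostsock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups $dhgroup
    # Target side: allow this host with the matching key (the ctrlr-key argument is
    # omitted for indices without a controller key, e.g. key3 in this log).
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn \
        --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid
    # Attach a controller through the host socket so the TCP qpair must authenticate.
    $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn \
        --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid
    # Confirm the qpair negotiated the expected digest/dhgroup and completed authentication.
    $rpc nvmf_subsystem_get_qpairs $subnqn \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'   # expect: sha512 / $dhgroup / completed
    # Tear down before the next combination.
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn
  done
done

The separate kernel-initiator step visible at auth.sh@52/@55, which replays the same keys via nvme connect/disconnect with --dhchap-secret, is left out of the sketch for brevity.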
00:26:15.915 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:16.174 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:26:16.174 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:16.174 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:16.174 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:16.174 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:16.174 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:16.174 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.174 13:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.174 13:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:16.174 13:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.174 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.174 13:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.742 00:26:16.742 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:16.742 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:16.742 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:17.001 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.001 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:17.001 13:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.001 13:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:17.001 13:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.001 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:17.001 { 00:26:17.001 "cntlid": 129, 00:26:17.002 "qid": 0, 00:26:17.002 "state": "enabled", 00:26:17.002 "listen_address": { 00:26:17.002 "trtype": "TCP", 00:26:17.002 "adrfam": "IPv4", 00:26:17.002 "traddr": "10.0.0.2", 00:26:17.002 "trsvcid": "4420" 00:26:17.002 }, 00:26:17.002 "peer_address": { 00:26:17.002 "trtype": "TCP", 00:26:17.002 "adrfam": "IPv4", 00:26:17.002 "traddr": "10.0.0.1", 00:26:17.002 "trsvcid": "53614" 00:26:17.002 }, 00:26:17.002 "auth": { 
00:26:17.002 "state": "completed", 00:26:17.002 "digest": "sha512", 00:26:17.002 "dhgroup": "ffdhe6144" 00:26:17.002 } 00:26:17.002 } 00:26:17.002 ]' 00:26:17.002 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:17.002 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:17.002 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:17.002 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:17.002 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:17.002 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:17.002 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:17.002 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:17.261 13:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:18.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.208 13:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.775 00:26:18.775 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:18.775 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:18.775 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:19.035 { 00:26:19.035 "cntlid": 131, 00:26:19.035 "qid": 0, 00:26:19.035 "state": "enabled", 00:26:19.035 "listen_address": { 00:26:19.035 "trtype": "TCP", 00:26:19.035 "adrfam": "IPv4", 00:26:19.035 "traddr": "10.0.0.2", 00:26:19.035 "trsvcid": "4420" 00:26:19.035 }, 00:26:19.035 "peer_address": { 00:26:19.035 "trtype": "TCP", 00:26:19.035 "adrfam": "IPv4", 00:26:19.035 "traddr": "10.0.0.1", 00:26:19.035 "trsvcid": "53642" 00:26:19.035 }, 00:26:19.035 "auth": { 00:26:19.035 "state": "completed", 00:26:19.035 "digest": "sha512", 00:26:19.035 "dhgroup": "ffdhe6144" 00:26:19.035 } 00:26:19.035 } 00:26:19.035 ]' 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:19.035 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:19.294 13:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:20.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.231 13:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:26:20.800 00:26:20.800 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:20.800 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:20.800 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:20.800 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.800 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:20.800 13:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.800 13:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:21.059 13:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.059 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:21.059 { 00:26:21.059 "cntlid": 133, 00:26:21.059 "qid": 0, 00:26:21.059 "state": "enabled", 00:26:21.059 "listen_address": { 00:26:21.059 "trtype": "TCP", 00:26:21.059 "adrfam": "IPv4", 00:26:21.059 "traddr": "10.0.0.2", 00:26:21.059 "trsvcid": "4420" 00:26:21.059 }, 00:26:21.059 "peer_address": { 00:26:21.059 "trtype": "TCP", 00:26:21.059 "adrfam": "IPv4", 00:26:21.059 "traddr": "10.0.0.1", 00:26:21.059 "trsvcid": "53666" 00:26:21.059 }, 00:26:21.059 "auth": { 00:26:21.059 "state": "completed", 00:26:21.059 "digest": "sha512", 00:26:21.059 "dhgroup": "ffdhe6144" 00:26:21.059 } 00:26:21.059 } 00:26:21.059 ]' 00:26:21.059 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:21.059 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:21.059 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:21.059 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:21.059 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:21.059 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:21.059 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:21.059 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:21.318 13:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:26:21.886 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:22.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.146 13:54:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:22.146 13:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:22.715 00:26:22.715 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:22.715 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:22.715 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:23.026 { 00:26:23.026 "cntlid": 135, 00:26:23.026 "qid": 0, 00:26:23.026 "state": "enabled", 00:26:23.026 "listen_address": { 
00:26:23.026 "trtype": "TCP", 00:26:23.026 "adrfam": "IPv4", 00:26:23.026 "traddr": "10.0.0.2", 00:26:23.026 "trsvcid": "4420" 00:26:23.026 }, 00:26:23.026 "peer_address": { 00:26:23.026 "trtype": "TCP", 00:26:23.026 "adrfam": "IPv4", 00:26:23.026 "traddr": "10.0.0.1", 00:26:23.026 "trsvcid": "53678" 00:26:23.026 }, 00:26:23.026 "auth": { 00:26:23.026 "state": "completed", 00:26:23.026 "digest": "sha512", 00:26:23.026 "dhgroup": "ffdhe6144" 00:26:23.026 } 00:26:23.026 } 00:26:23.026 ]' 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:23.026 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:23.285 13:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:24.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.222 13:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.789 00:26:24.789 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:24.789 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:24.789 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:25.047 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.047 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:25.047 13:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.047 13:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:25.047 13:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.047 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:25.047 { 00:26:25.047 "cntlid": 137, 00:26:25.047 "qid": 0, 00:26:25.047 "state": "enabled", 00:26:25.047 "listen_address": { 00:26:25.047 "trtype": "TCP", 00:26:25.047 "adrfam": "IPv4", 00:26:25.047 "traddr": "10.0.0.2", 00:26:25.047 "trsvcid": "4420" 00:26:25.047 }, 00:26:25.047 "peer_address": { 00:26:25.047 "trtype": "TCP", 00:26:25.047 "adrfam": "IPv4", 00:26:25.047 "traddr": "10.0.0.1", 00:26:25.047 "trsvcid": "39938" 00:26:25.047 }, 00:26:25.047 "auth": { 00:26:25.048 "state": "completed", 00:26:25.048 "digest": "sha512", 00:26:25.048 "dhgroup": "ffdhe8192" 00:26:25.048 } 00:26:25.048 } 00:26:25.048 ]' 00:26:25.048 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:25.048 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:25.048 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:25.305 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:25.305 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:25.306 13:54:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:25.306 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:25.306 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:25.565 13:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:26:26.132 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:26.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:26.132 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:26.132 13:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.132 13:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:26.132 13:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.132 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:26.132 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:26.132 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:26.392 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:26:26.392 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:26.392 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:26.392 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:26.392 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:26.392 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:26.392 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.392 13:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.392 13:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:26.392 13:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.392 13:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.392 13:54:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.960 00:26:26.960 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:26.961 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:26.961 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:27.219 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.219 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:27.219 13:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.219 13:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:27.220 13:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.220 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:27.220 { 00:26:27.220 "cntlid": 139, 00:26:27.220 "qid": 0, 00:26:27.220 "state": "enabled", 00:26:27.220 "listen_address": { 00:26:27.220 "trtype": "TCP", 00:26:27.220 "adrfam": "IPv4", 00:26:27.220 "traddr": "10.0.0.2", 00:26:27.220 "trsvcid": "4420" 00:26:27.220 }, 00:26:27.220 "peer_address": { 00:26:27.220 "trtype": "TCP", 00:26:27.220 "adrfam": "IPv4", 00:26:27.220 "traddr": "10.0.0.1", 00:26:27.220 "trsvcid": "39964" 00:26:27.220 }, 00:26:27.220 "auth": { 00:26:27.220 "state": "completed", 00:26:27.220 "digest": "sha512", 00:26:27.220 "dhgroup": "ffdhe8192" 00:26:27.220 } 00:26:27.220 } 00:26:27.220 ]' 00:26:27.220 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:27.220 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:27.220 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:27.479 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:27.479 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:27.479 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:27.479 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:27.479 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:27.738 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjA2MmU3M2YzNjlkMDE3ZWYwMDZhYzQ3YjNkNDVmNzmdhPgU: --dhchap-ctrl-secret DHHC-1:02:OTFlOTQzMjY0ODJlNTllMzJiNDg2NWQzM2ZmNmVhZTE0YzNlZmUzYWQ2MDgwZDk1988s/A==: 00:26:28.306 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:28.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:26:28.306 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:28.306 13:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.306 13:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:28.306 13:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.306 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:28.306 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:28.306 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:28.565 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:26:28.565 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:28.565 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:28.565 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:28.565 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:28.565 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:28.565 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.565 13:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.565 13:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:28.565 13:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.565 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.565 13:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.133 00:26:29.133 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:29.133 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:29.133 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:29.391 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.391 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:29.391 13:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:26:29.391 13:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:29.391 13:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.391 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:29.391 { 00:26:29.391 "cntlid": 141, 00:26:29.391 "qid": 0, 00:26:29.391 "state": "enabled", 00:26:29.391 "listen_address": { 00:26:29.391 "trtype": "TCP", 00:26:29.391 "adrfam": "IPv4", 00:26:29.391 "traddr": "10.0.0.2", 00:26:29.391 "trsvcid": "4420" 00:26:29.391 }, 00:26:29.391 "peer_address": { 00:26:29.391 "trtype": "TCP", 00:26:29.391 "adrfam": "IPv4", 00:26:29.391 "traddr": "10.0.0.1", 00:26:29.391 "trsvcid": "39992" 00:26:29.391 }, 00:26:29.391 "auth": { 00:26:29.391 "state": "completed", 00:26:29.391 "digest": "sha512", 00:26:29.391 "dhgroup": "ffdhe8192" 00:26:29.391 } 00:26:29.391 } 00:26:29.391 ]' 00:26:29.391 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:29.650 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:29.650 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:29.650 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:29.650 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:29.650 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:29.650 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:29.650 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:29.909 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:N2JkODQ1YzI0ZTBjNTRiZWY0N2UzNzFkOTE0MjU3ZWQwYTNhOGJiMzZkNzVmZWM2fiutxw==: --dhchap-ctrl-secret DHHC-1:01:ZGU1NDkyNzQxOGM2NmE5MmViOGFlZTA0N2IxMDJkZmaUxn5Q: 00:26:30.477 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:30.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:30.477 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:30.477 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.477 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:30.477 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.477 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:30.477 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:30.477 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:30.736 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:26:30.736 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:30.736 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:30.737 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:30.737 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:30.737 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:30.737 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:26:30.737 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.737 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:30.737 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.737 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:30.737 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:31.304 00:26:31.304 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:31.304 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:31.304 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:31.563 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.564 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:31.564 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.564 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:31.564 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.564 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:31.564 { 00:26:31.564 "cntlid": 143, 00:26:31.564 "qid": 0, 00:26:31.564 "state": "enabled", 00:26:31.564 "listen_address": { 00:26:31.564 "trtype": "TCP", 00:26:31.564 "adrfam": "IPv4", 00:26:31.564 "traddr": "10.0.0.2", 00:26:31.564 "trsvcid": "4420" 00:26:31.564 }, 00:26:31.564 "peer_address": { 00:26:31.564 "trtype": "TCP", 00:26:31.564 "adrfam": "IPv4", 00:26:31.564 "traddr": "10.0.0.1", 00:26:31.564 "trsvcid": "40024" 00:26:31.564 }, 00:26:31.564 "auth": { 00:26:31.564 "state": "completed", 00:26:31.564 "digest": "sha512", 00:26:31.564 "dhgroup": "ffdhe8192" 00:26:31.564 } 00:26:31.564 } 00:26:31.564 ]' 00:26:31.564 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:31.564 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:31.564 13:54:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:31.823 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:31.823 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:31.823 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:31.823 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:31.823 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:32.081 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:26:32.649 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:32.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:32.649 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:32.649 13:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.649 13:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:32.649 13:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.649 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:26:32.649 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:26:32.650 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:26:32.650 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:32.650 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:32.650 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:32.909 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:26:32.909 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:32.909 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:32.909 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:32.909 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:32.909 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:32.909 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
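[Note] Each connect_authenticate round traced above follows the same shape. A minimal shell sketch of one round, with the rpc.py path, sockets and NQNs copied from the log; the auth.sh rpc_cmd/hostrpc wrappers and the key material behind the names key0/ckey0 (registered earlier in auth.sh, outside this excerpt) are assumed rather than reproduced:

  # Hedged reconstruction of one connect_authenticate round from the logged commands.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: restrict the initiator to the digest/DH group under test.
  "$rpc_py" -s "$host_sock" bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Target side: allow the host on the subsystem with a DH-HMAC-CHAP key pair.
  "$rpc_py" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller; the AUTH negotiation happens here.
  "$rpc_py" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # nvmf_subsystem_get_qpairs on the target should then report auth.state
  # "completed" with the expected digest and dhgroup, after which the test
  # detaches, reconnects via nvme-cli, and removes the host again.
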
00:26:32.909 13:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.909 13:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:32.909 13:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.909 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.909 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.476 00:26:33.476 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:33.476 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:33.476 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:33.734 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.734 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:33.734 13:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.734 13:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:33.734 13:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.734 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:33.734 { 00:26:33.734 "cntlid": 145, 00:26:33.734 "qid": 0, 00:26:33.734 "state": "enabled", 00:26:33.734 "listen_address": { 00:26:33.734 "trtype": "TCP", 00:26:33.734 "adrfam": "IPv4", 00:26:33.734 "traddr": "10.0.0.2", 00:26:33.734 "trsvcid": "4420" 00:26:33.734 }, 00:26:33.734 "peer_address": { 00:26:33.734 "trtype": "TCP", 00:26:33.734 "adrfam": "IPv4", 00:26:33.734 "traddr": "10.0.0.1", 00:26:33.734 "trsvcid": "40058" 00:26:33.734 }, 00:26:33.734 "auth": { 00:26:33.734 "state": "completed", 00:26:33.734 "digest": "sha512", 00:26:33.734 "dhgroup": "ffdhe8192" 00:26:33.734 } 00:26:33.734 } 00:26:33.734 ]' 00:26:33.734 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:33.734 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:33.734 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:33.992 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:33.992 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:33.992 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:33.992 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:33.992 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:34.250 
13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MThiZGVkMWU3MTEzZTc2NjgyOTg1ZWYyN2EwMWI3YjYxZTRhNWFiZDljZDBiYTZjsu3ecA==: --dhchap-ctrl-secret DHHC-1:03:ZjQwNDIzZDdlNjZlMjYwYzY1MGNkNjkxMDg2M2RmMTQyNmIzNDc0YTI0NjkyZDBjM2FiMDIwMTJiZGY5NTI2MxYc9y8=: 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:34.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:34.818 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:35.387 request: 00:26:35.387 { 00:26:35.387 "name": "nvme0", 00:26:35.387 "trtype": "tcp", 00:26:35.387 "traddr": 
"10.0.0.2", 00:26:35.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:26:35.387 "adrfam": "ipv4", 00:26:35.387 "trsvcid": "4420", 00:26:35.387 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:35.387 "dhchap_key": "key2", 00:26:35.387 "method": "bdev_nvme_attach_controller", 00:26:35.387 "req_id": 1 00:26:35.387 } 00:26:35.387 Got JSON-RPC error response 00:26:35.387 response: 00:26:35.387 { 00:26:35.387 "code": -5, 00:26:35.387 "message": "Input/output error" 00:26:35.387 } 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:35.387 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:35.956 request: 00:26:35.956 { 00:26:35.956 "name": "nvme0", 00:26:35.956 "trtype": "tcp", 00:26:35.956 "traddr": "10.0.0.2", 00:26:35.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:26:35.956 "adrfam": "ipv4", 00:26:35.956 "trsvcid": "4420", 00:26:35.956 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:35.956 "dhchap_key": "key1", 00:26:35.956 "dhchap_ctrlr_key": "ckey2", 00:26:35.956 "method": "bdev_nvme_attach_controller", 00:26:35.956 "req_id": 1 00:26:35.956 } 00:26:35.956 Got JSON-RPC error response 00:26:35.956 response: 00:26:35.956 { 00:26:35.956 "code": -5, 00:26:35.956 "message": "Input/output error" 00:26:35.956 } 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.956 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.525 request: 00:26:36.525 { 00:26:36.525 "name": "nvme0", 00:26:36.525 "trtype": "tcp", 00:26:36.525 "traddr": "10.0.0.2", 00:26:36.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:26:36.525 "adrfam": "ipv4", 00:26:36.525 "trsvcid": "4420", 00:26:36.525 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:36.525 "dhchap_key": "key1", 00:26:36.525 "dhchap_ctrlr_key": "ckey1", 00:26:36.525 "method": "bdev_nvme_attach_controller", 00:26:36.525 "req_id": 1 00:26:36.525 } 00:26:36.525 Got JSON-RPC error response 00:26:36.525 response: 00:26:36.525 { 00:26:36.525 "code": -5, 00:26:36.525 "message": "Input/output error" 00:26:36.525 } 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1431934 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1431934 ']' 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1431934 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:36.525 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1431934 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1431934' 00:26:36.784 killing process with pid 1431934 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1431934 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1431934 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:26:36.784 13:54:51 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1458226 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1458226 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1458226 ']' 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:36.784 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:37.720 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:37.720 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:26:37.720 13:54:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:37.720 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:37.720 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:37.978 13:54:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.978 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:26:37.978 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1458226 00:26:37.978 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1458226 ']' 00:26:37.978 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.978 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:37.978 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
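[Note] The target restart logged above (auth.sh@139 via nvmfappstart) reduces to the sketch below; the binary path, namespace and flags are taken from the trace, while the nvmfappstart/waitforlisten helper internals are only approximated:

  # Hedged sketch of relaunching nvmf_tgt with DH-HMAC-CHAP debug logging enabled,
  # pausing at --wait-for-rpc until framework initialization is requested over RPC.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # Minimal stand-in for waitforlisten: poll the default RPC socket until the new
  # process answers (the real helper prints the "Waiting for process ..." message seen above).
  while ! "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
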
00:26:37.978 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:37.978 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:38.236 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:38.802 00:26:38.802 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:38.802 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:38.802 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:39.061 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.061 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:39.061 13:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.061 13:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:39.061 13:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.061 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:39.061 { 00:26:39.061 
"cntlid": 1, 00:26:39.061 "qid": 0, 00:26:39.061 "state": "enabled", 00:26:39.061 "listen_address": { 00:26:39.061 "trtype": "TCP", 00:26:39.061 "adrfam": "IPv4", 00:26:39.061 "traddr": "10.0.0.2", 00:26:39.061 "trsvcid": "4420" 00:26:39.061 }, 00:26:39.061 "peer_address": { 00:26:39.061 "trtype": "TCP", 00:26:39.061 "adrfam": "IPv4", 00:26:39.061 "traddr": "10.0.0.1", 00:26:39.061 "trsvcid": "34960" 00:26:39.061 }, 00:26:39.061 "auth": { 00:26:39.061 "state": "completed", 00:26:39.061 "digest": "sha512", 00:26:39.061 "dhgroup": "ffdhe8192" 00:26:39.061 } 00:26:39.061 } 00:26:39.061 ]' 00:26:39.061 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:39.061 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:39.061 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:39.319 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:39.319 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:39.319 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:39.319 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:39.319 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:39.577 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDVlY2FlMTUyMDg1YjE4YzY0ODU3NDQ4NmI0MDI1OWJlNTQxNDgxMWZiZGUyNjhhNDY3ZjAxYjc3NTFkYTQyM2i/F0k=: 00:26:40.144 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:40.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:40.144 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:40.144 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.144 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:40.144 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.144 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:26:40.144 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.144 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:40.144 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.144 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:26:40.144 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:26:40.402 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:40.403 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:26:40.403 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:40.403 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:26:40.403 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:40.403 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:26:40.403 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:40.403 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:40.403 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:40.661 request: 00:26:40.661 { 00:26:40.661 "name": "nvme0", 00:26:40.661 "trtype": "tcp", 00:26:40.661 "traddr": "10.0.0.2", 00:26:40.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:26:40.661 "adrfam": "ipv4", 00:26:40.661 "trsvcid": "4420", 00:26:40.661 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:40.661 "dhchap_key": "key3", 00:26:40.661 "method": "bdev_nvme_attach_controller", 00:26:40.661 "req_id": 1 00:26:40.661 } 00:26:40.661 Got JSON-RPC error response 00:26:40.661 response: 00:26:40.661 { 00:26:40.661 "code": -5, 00:26:40.661 "message": "Input/output error" 00:26:40.661 } 00:26:40.661 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:26:40.661 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:40.661 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:40.661 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:40.661 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:26:40.661 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:26:40.661 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:26:40.661 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:26:40.920 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:40.920 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:26:40.920 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:40.920 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:26:40.920 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:40.920 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:26:40.920 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:40.920 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:40.920 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:41.179 request: 00:26:41.179 { 00:26:41.179 "name": "nvme0", 00:26:41.179 "trtype": "tcp", 00:26:41.179 "traddr": "10.0.0.2", 00:26:41.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:26:41.179 "adrfam": "ipv4", 00:26:41.179 "trsvcid": "4420", 00:26:41.179 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:41.179 "dhchap_key": "key3", 00:26:41.179 "method": "bdev_nvme_attach_controller", 00:26:41.179 "req_id": 1 00:26:41.179 } 00:26:41.179 Got JSON-RPC error response 00:26:41.179 response: 00:26:41.179 { 00:26:41.179 "code": -5, 00:26:41.179 "message": "Input/output error" 00:26:41.179 } 00:26:41.179 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:26:41.179 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:41.179 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:41.179 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:41.179 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:41.180 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:41.439 request: 00:26:41.439 { 00:26:41.439 "name": "nvme0", 00:26:41.439 "trtype": "tcp", 00:26:41.439 "traddr": "10.0.0.2", 00:26:41.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:26:41.439 "adrfam": "ipv4", 00:26:41.439 "trsvcid": "4420", 00:26:41.439 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:41.439 "dhchap_key": "key0", 00:26:41.439 "dhchap_ctrlr_key": "key1", 00:26:41.439 "method": "bdev_nvme_attach_controller", 00:26:41.439 "req_id": 1 00:26:41.439 } 00:26:41.439 Got JSON-RPC error response 00:26:41.439 response: 00:26:41.439 { 00:26:41.439 "code": -5, 00:26:41.439 "message": "Input/output error" 00:26:41.439 } 00:26:41.439 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:26:41.439 13:54:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:41.439 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:41.439 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:41.439 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:26:41.439 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:26:41.698 00:26:41.698 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:26:41.698 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:41.698 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:26:41.957 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.957 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:41.957 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1431986 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1431986 ']' 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1431986 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1431986 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1431986' 00:26:42.216 killing process with pid 1431986 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1431986 00:26:42.216 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1431986 00:26:42.785 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:26:42.785 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:42.785 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
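[Note] Cleanup above tears the host application down through the shared killprocess helper, which only signals a process it can positively identify. A sketch of that guard, reconstructed from the checks visible in the trace; the real helper lives in autotest_common.sh and may differ in details:

  # Reconstructed from the logged checks; not the verbatim autotest_common.sh helper.
  killprocess_sketch() {
      local pid=$1 process_name=''
      [ -n "$pid" ] || return 1                  # refuse an empty pid
      kill -0 "$pid" 2>/dev/null || return 0     # process already gone (assumed behaviour)
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1     # never signal a sudo wrapper (assumed)
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                 # reap it so sockets and ports are freed
  }
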
00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:42.785 rmmod nvme_tcp 00:26:42.785 rmmod nvme_fabrics 00:26:42.785 rmmod nvme_keyring 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1458226 ']' 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1458226 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1458226 ']' 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1458226 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1458226 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1458226' 00:26:42.785 killing process with pid 1458226 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1458226 00:26:42.785 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1458226 00:26:43.044 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:43.044 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:43.044 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:43.044 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:43.044 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:43.044 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.044 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:43.044 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.948 13:54:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:44.948 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1AH /tmp/spdk.key-sha256.8ct /tmp/spdk.key-sha384.4qk /tmp/spdk.key-sha512.KPU /tmp/spdk.key-sha512.OiU /tmp/spdk.key-sha384.mdR /tmp/spdk.key-sha256.4l3 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:26:45.208 00:26:45.208 real 2m44.415s 00:26:45.208 user 6m6.988s 00:26:45.208 sys 0m34.831s 00:26:45.208 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:45.208 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:45.208 ************************************ 00:26:45.208 END TEST 
nvmf_auth_target 00:26:45.208 ************************************ 00:26:45.208 13:54:59 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:26:45.208 13:54:59 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:26:45.208 13:54:59 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:26:45.208 13:54:59 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:45.208 13:54:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:45.208 ************************************ 00:26:45.208 START TEST nvmf_bdevio_no_huge 00:26:45.208 ************************************ 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:26:45.208 * Looking for test storage... 00:26:45.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
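The harness hands control to the next suite through run_test with the script and flags shown above; reproduced outside the harness, the equivalent standalone invocation would be approximately the sketch below (it assumes the same environment as this job: root privileges and the two E810/cvl ports seen later in the trace).

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh \
        --transport=tcp --no-hugepages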
00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:26:45.208 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:26:45.209 13:54:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:55.222 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:55.222 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.222 13:55:07 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:55.222 Found net devices under 0000:af:00.0: cvl_0_0 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.222 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:55.223 Found net devices under 0000:af:00.1: cvl_0_1 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.223 13:55:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.223 
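nvmf_tcp_init isolates one of the two detected E810 ports in its own network namespace so target and initiator can talk over real hardware on a single machine. The lines above and immediately below boil down to the following sequence (condensed from the trace): cvl_0_0 becomes the target-side port inside cvl_0_0_ns_spdk at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic to the default port, then verify both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1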
13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:55.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:26:55.223 00:26:55.223 --- 10.0.0.2 ping statistics --- 00:26:55.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.223 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:55.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:26:55.223 00:26:55.223 --- 10.0.0.1 ping statistics --- 00:26:55.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.223 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1463711 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1463711 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 1463711 ']' 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:55.223 13:55:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:55.223 [2024-06-10 13:55:08.384551] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:26:55.223 [2024-06-10 13:55:08.384619] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:26:55.223 [2024-06-10 13:55:08.522770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.223 [2024-06-10 13:55:08.654824] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.223 [2024-06-10 13:55:08.654870] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.223 [2024-06-10 13:55:08.654884] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.223 [2024-06-10 13:55:08.654896] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.223 [2024-06-10 13:55:08.654906] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:55.223 [2024-06-10 13:55:08.655034] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:26:55.223 [2024-06-10 13:55:08.655143] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:26:55.223 [2024-06-10 13:55:08.655252] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.223 [2024-06-10 13:55:08.655252] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:55.223 [2024-06-10 13:55:09.295456] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 
00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:55.223 Malloc0 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:55.223 [2024-06-10 13:55:09.343958] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:55.223 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:55.223 { 00:26:55.223 "params": { 00:26:55.223 "name": "Nvme$subsystem", 00:26:55.223 "trtype": "$TEST_TRANSPORT", 00:26:55.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:55.223 "adrfam": "ipv4", 00:26:55.223 "trsvcid": "$NVMF_PORT", 00:26:55.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:55.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:55.223 "hdgst": ${hdgst:-false}, 00:26:55.223 "ddgst": ${ddgst:-false} 00:26:55.223 }, 00:26:55.223 "method": "bdev_nvme_attach_controller" 00:26:55.223 } 00:26:55.223 EOF 00:26:55.224 )") 00:26:55.224 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:26:55.224 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
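Before bdevio runs, the target started inside the namespace is provisioned with a TCP transport, a 64 MiB malloc bdev, and a subsystem listening on 10.0.0.2:4420. Stripped of the rpc_cmd wrappers used above and issued directly through scripts/rpc.py for illustration, that provisioning is approximately:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420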
00:26:55.224 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:26:55.224 13:55:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:55.224 "params": { 00:26:55.224 "name": "Nvme1", 00:26:55.224 "trtype": "tcp", 00:26:55.224 "traddr": "10.0.0.2", 00:26:55.224 "adrfam": "ipv4", 00:26:55.224 "trsvcid": "4420", 00:26:55.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:55.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:55.224 "hdgst": false, 00:26:55.224 "ddgst": false 00:26:55.224 }, 00:26:55.224 "method": "bdev_nvme_attach_controller" 00:26:55.224 }' 00:26:55.224 [2024-06-10 13:55:09.400845] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:26:55.224 [2024-06-10 13:55:09.400909] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1463805 ] 00:26:55.224 [2024-06-10 13:55:09.526098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:55.224 [2024-06-10 13:55:09.659236] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.224 [2024-06-10 13:55:09.659328] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.224 [2024-06-10 13:55:09.659332] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.483 I/O targets: 00:26:55.483 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:26:55.483 00:26:55.483 00:26:55.483 CUnit - A unit testing framework for C - Version 2.1-3 00:26:55.483 http://cunit.sourceforge.net/ 00:26:55.483 00:26:55.483 00:26:55.483 Suite: bdevio tests on: Nvme1n1 00:26:55.483 Test: blockdev write read block ...passed 00:26:55.741 Test: blockdev write zeroes read block ...passed 00:26:55.741 Test: blockdev write zeroes read no split ...passed 00:26:55.741 Test: blockdev write zeroes read split ...passed 00:26:55.741 Test: blockdev write zeroes read split partial ...passed 00:26:55.741 Test: blockdev reset ...[2024-06-10 13:55:10.100170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.741 [2024-06-10 13:55:10.100250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a35a0 (9): Bad file descriptor 00:26:55.741 [2024-06-10 13:55:10.122057] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:55.741 passed 00:26:55.741 Test: blockdev write read 8 blocks ...passed 00:26:55.741 Test: blockdev write read size > 128k ...passed 00:26:55.741 Test: blockdev write read invalid size ...passed 00:26:55.741 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:55.741 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:55.741 Test: blockdev write read max offset ...passed 00:26:56.000 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:56.000 Test: blockdev writev readv 8 blocks ...passed 00:26:56.000 Test: blockdev writev readv 30 x 1block ...passed 00:26:56.000 Test: blockdev writev readv block ...passed 00:26:56.000 Test: blockdev writev readv size > 128k ...passed 00:26:56.000 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:56.000 Test: blockdev comparev and writev ...[2024-06-10 13:55:10.337382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.000 [2024-06-10 13:55:10.337414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:56.000 [2024-06-10 13:55:10.337430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.000 [2024-06-10 13:55:10.337440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.000 [2024-06-10 13:55:10.337804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.000 [2024-06-10 13:55:10.337816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:56.000 [2024-06-10 13:55:10.337830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.000 [2024-06-10 13:55:10.337840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:56.000 [2024-06-10 13:55:10.338203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.000 [2024-06-10 13:55:10.338216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:56.000 [2024-06-10 13:55:10.338229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.000 [2024-06-10 13:55:10.338239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:56.000 [2024-06-10 13:55:10.338605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.000 [2024-06-10 13:55:10.338618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:56.000 [2024-06-10 13:55:10.338632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.000 [2024-06-10 13:55:10.338642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:56.000 passed 00:26:56.000 Test: blockdev nvme passthru rw ...passed 00:26:56.000 Test: blockdev nvme passthru vendor specific ...[2024-06-10 13:55:10.421027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:56.000 [2024-06-10 13:55:10.421046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:56.000 [2024-06-10 13:55:10.421258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:56.000 [2024-06-10 13:55:10.421269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:56.000 [2024-06-10 13:55:10.421469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:56.000 [2024-06-10 13:55:10.421480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:56.000 [2024-06-10 13:55:10.421694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:56.000 [2024-06-10 13:55:10.421706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:56.000 passed 00:26:56.000 Test: blockdev nvme admin passthru ...passed 00:26:56.259 Test: blockdev copy ...passed 00:26:56.259 00:26:56.259 Run Summary: Type Total Ran Passed Failed Inactive 00:26:56.259 suites 1 1 n/a 0 0 00:26:56.259 tests 23 23 23 0 0 00:26:56.259 asserts 152 152 152 0 n/a 00:26:56.259 00:26:56.259 Elapsed time = 1.220 seconds 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:56.518 rmmod nvme_tcp 00:26:56.518 rmmod nvme_fabrics 00:26:56.518 rmmod nvme_keyring 00:26:56.518 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.519 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:26:56.519 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:26:56.519 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1463711 ']' 00:26:56.519 13:55:10 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1463711 00:26:56.519 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 1463711 ']' 00:26:56.519 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 1463711 00:26:56.519 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:26:56.519 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:56.519 13:55:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1463711 00:26:56.778 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:26:56.778 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:26:56.778 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1463711' 00:26:56.778 killing process with pid 1463711 00:26:56.778 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 1463711 00:26:56.778 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 1463711 00:26:57.037 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:57.037 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:57.037 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:57.037 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:57.037 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:57.037 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.037 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:57.037 13:55:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.577 13:55:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:59.577 00:26:59.577 real 0m14.072s 00:26:59.577 user 0m15.584s 00:26:59.577 sys 0m8.093s 00:26:59.577 13:55:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:59.577 13:55:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:59.577 ************************************ 00:26:59.577 END TEST nvmf_bdevio_no_huge 00:26:59.577 ************************************ 00:26:59.577 13:55:13 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:26:59.577 13:55:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:59.577 13:55:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:59.577 13:55:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:59.577 ************************************ 00:26:59.577 START TEST nvmf_tls 00:26:59.577 ************************************ 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:26:59.577 * Looking for test storage... 
00:26:59.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.577 13:55:13 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:26:59.578 13:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:27:07.703 
13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:27:07.703 13:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:07.703 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:07.703 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:07.703 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:07.704 Found net devices under 0000:af:00.0: cvl_0_0 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:07.704 Found net devices under 0000:af:00.1: cvl_0_1 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.704 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:07.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:27:07.963 00:27:07.963 --- 10.0.0.2 ping statistics --- 00:27:07.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.963 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:07.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:27:07.963 00:27:07.963 --- 10.0.0.1 ping statistics --- 00:27:07.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.963 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1468540 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1468540 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1468540 ']' 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:07.963 13:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:07.963 [2024-06-10 13:55:22.430175] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:27:07.963 [2024-06-10 13:55:22.430237] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.222 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.222 [2024-06-10 13:55:22.551685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.222 [2024-06-10 13:55:22.633678] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.222 [2024-06-10 13:55:22.633720] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:08.222 [2024-06-10 13:55:22.633733] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.222 [2024-06-10 13:55:22.633745] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.222 [2024-06-10 13:55:22.633755] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:08.222 [2024-06-10 13:55:22.633786] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.158 13:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:09.158 13:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:27:09.158 13:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:09.158 13:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:09.158 13:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:09.158 13:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.158 13:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:27:09.158 13:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:27:09.158 true 00:27:09.158 13:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:27:09.158 13:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:27:09.417 13:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:27:09.417 13:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:27:09.417 13:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:27:09.676 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:27:09.676 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:27:09.935 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:27:09.935 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:27:09.935 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:27:10.193 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:27:10.193 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:27:10.452 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:27:10.452 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:27:10.452 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:27:10.452 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:27:10.711 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:27:10.711 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:27:10.711 13:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:27:10.969 13:55:25 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:27:10.969 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:27:10.969 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:27:10.969 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:27:10.969 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:27:11.228 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:27:11.228 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # local prefix key digest 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # key=00112233445566778899aabbccddeeff 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # digest=1 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@711 -- # python - 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # local prefix key digest 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # key=ffeeddccbbaa99887766554433221100 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # digest=1 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@711 -- # python - 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.mSof1nV6oG 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.F2KW9NR7WD 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.mSof1nV6oG 00:27:11.486 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.F2KW9NR7WD 00:27:11.745 13:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:27:11.745 13:55:26 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:27:12.313 13:55:26 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.mSof1nV6oG 00:27:12.313 13:55:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.mSof1nV6oG 00:27:12.313 13:55:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:12.313 [2024-06-10 13:55:26.697422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.313 13:55:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:27:12.571 13:55:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:27:12.829 [2024-06-10 13:55:27.146605] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:12.829 [2024-06-10 13:55:27.146845] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.829 13:55:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:27:13.086 malloc0 00:27:13.086 13:55:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:27:13.344 13:55:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mSof1nV6oG 00:27:13.602 [2024-06-10 13:55:27.833807] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:13.602 13:55:27 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.mSof1nV6oG 00:27:13.602 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.581 Initializing NVMe Controllers 00:27:23.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:23.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:23.581 Initialization complete. Launching workers. 
00:27:23.581 ======================================================== 00:27:23.581 Latency(us) 00:27:23.581 Device Information : IOPS MiB/s Average min max 00:27:23.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11562.90 45.17 5535.97 1209.14 8325.14 00:27:23.581 ======================================================== 00:27:23.581 Total : 11562.90 45.17 5535.97 1209.14 8325.14 00:27:23.581 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mSof1nV6oG 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mSof1nV6oG' 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1471210 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1471210 /var/tmp/bdevperf.sock 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1471210 ']' 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:23.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:23.581 13:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:23.581 [2024-06-10 13:55:38.030758] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:27:23.581 [2024-06-10 13:55:38.030825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471210 ] 00:27:23.839 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.839 [2024-06-10 13:55:38.124656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.839 [2024-06-10 13:55:38.196770] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.774 13:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:24.774 13:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:27:24.774 13:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mSof1nV6oG 00:27:24.774 [2024-06-10 13:55:39.143155] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:24.774 [2024-06-10 13:55:39.143252] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:24.774 TLSTESTn1 00:27:24.774 13:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:27:25.033 Running I/O for 10 seconds... 00:27:35.006 00:27:35.006 Latency(us) 00:27:35.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:35.006 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:35.006 Verification LBA range: start 0x0 length 0x2000 00:27:35.006 TLSTESTn1 : 10.03 3448.86 13.47 0.00 0.00 37041.61 6448.74 59978.55 00:27:35.006 =================================================================================================================== 00:27:35.006 Total : 3448.86 13.47 0.00 0.00 37041.61 6448.74 59978.55 00:27:35.006 0 00:27:35.006 13:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:35.006 13:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1471210 00:27:35.006 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1471210 ']' 00:27:35.006 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1471210 00:27:35.006 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:27:35.006 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:35.006 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1471210 00:27:35.006 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:27:35.006 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:27:35.006 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1471210' 00:27:35.006 killing process with pid 1471210 00:27:35.006 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1471210 00:27:35.006 Received shutdown signal, test time was about 10.000000 seconds 00:27:35.006 00:27:35.006 Latency(us) 00:27:35.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:27:35.006 =================================================================================================================== 00:27:35.006 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:35.006 [2024-06-10 13:55:49.453697] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:35.006 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1471210 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F2KW9NR7WD 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F2KW9NR7WD 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F2KW9NR7WD 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.F2KW9NR7WD' 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1473150 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1473150 /var/tmp/bdevperf.sock 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1473150 ']' 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:35.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:35.265 13:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:35.265 [2024-06-10 13:55:49.689343] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:27:35.265 [2024-06-10 13:55:49.689410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473150 ] 00:27:35.552 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.552 [2024-06-10 13:55:49.784129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.552 [2024-06-10 13:55:49.857340] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.168 13:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:36.168 13:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:27:36.168 13:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F2KW9NR7WD 00:27:36.426 [2024-06-10 13:55:50.804917] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:36.426 [2024-06-10 13:55:50.805008] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:36.426 [2024-06-10 13:55:50.810447] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:36.426 [2024-06-10 13:55:50.811363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab6420 (107): Transport endpoint is not connected 00:27:36.426 [2024-06-10 13:55:50.812355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab6420 (9): Bad file descriptor 00:27:36.426 [2024-06-10 13:55:50.813356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.426 [2024-06-10 13:55:50.813369] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:27:36.426 [2024-06-10 13:55:50.813380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:36.426 request: 00:27:36.426 { 00:27:36.426 "name": "TLSTEST", 00:27:36.426 "trtype": "tcp", 00:27:36.426 "traddr": "10.0.0.2", 00:27:36.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:36.426 "adrfam": "ipv4", 00:27:36.426 "trsvcid": "4420", 00:27:36.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:36.426 "psk": "/tmp/tmp.F2KW9NR7WD", 00:27:36.426 "method": "bdev_nvme_attach_controller", 00:27:36.426 "req_id": 1 00:27:36.426 } 00:27:36.426 Got JSON-RPC error response 00:27:36.426 response: 00:27:36.426 { 00:27:36.426 "code": -5, 00:27:36.426 "message": "Input/output error" 00:27:36.426 } 00:27:36.426 13:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1473150 00:27:36.426 13:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1473150 ']' 00:27:36.426 13:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1473150 00:27:36.426 13:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:27:36.426 13:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:36.426 13:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1473150 00:27:36.426 13:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:27:36.426 13:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:27:36.426 13:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1473150' 00:27:36.426 killing process with pid 1473150 00:27:36.426 13:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1473150 00:27:36.426 Received shutdown signal, test time was about 10.000000 seconds 00:27:36.426 00:27:36.426 Latency(us) 00:27:36.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.426 =================================================================================================================== 00:27:36.426 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:36.426 [2024-06-10 13:55:50.896814] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:36.426 13:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1473150 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mSof1nV6oG 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mSof1nV6oG 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mSof1nV6oG 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mSof1nV6oG' 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1473357 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:36.684 13:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1473357 /var/tmp/bdevperf.sock 00:27:36.685 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1473357 ']' 00:27:36.685 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:36.685 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:36.685 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:36.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:36.685 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:36.685 13:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:36.685 [2024-06-10 13:55:51.121192] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:27:36.685 [2024-06-10 13:55:51.121258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473357 ] 00:27:36.943 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.943 [2024-06-10 13:55:51.216417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.943 [2024-06-10 13:55:51.288966] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.mSof1nV6oG 00:27:37.881 [2024-06-10 13:55:52.196345] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:37.881 [2024-06-10 13:55:52.196433] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:37.881 [2024-06-10 13:55:52.203939] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:27:37.881 [2024-06-10 13:55:52.203967] posix.c: 591:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:27:37.881 [2024-06-10 13:55:52.204000] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:37.881 [2024-06-10 13:55:52.204713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0d420 (107): Transport endpoint is not connected 00:27:37.881 [2024-06-10 13:55:52.205705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0d420 (9): Bad file descriptor 00:27:37.881 [2024-06-10 13:55:52.206706] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.881 [2024-06-10 13:55:52.206717] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:27:37.881 [2024-06-10 13:55:52.206728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:37.881 request: 00:27:37.881 { 00:27:37.881 "name": "TLSTEST", 00:27:37.881 "trtype": "tcp", 00:27:37.881 "traddr": "10.0.0.2", 00:27:37.881 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:37.881 "adrfam": "ipv4", 00:27:37.881 "trsvcid": "4420", 00:27:37.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.881 "psk": "/tmp/tmp.mSof1nV6oG", 00:27:37.881 "method": "bdev_nvme_attach_controller", 00:27:37.881 "req_id": 1 00:27:37.881 } 00:27:37.881 Got JSON-RPC error response 00:27:37.881 response: 00:27:37.881 { 00:27:37.881 "code": -5, 00:27:37.881 "message": "Input/output error" 00:27:37.881 } 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1473357 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1473357 ']' 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1473357 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1473357 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1473357' 00:27:37.881 killing process with pid 1473357 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1473357 00:27:37.881 Received shutdown signal, test time was about 10.000000 seconds 00:27:37.881 00:27:37.881 Latency(us) 00:27:37.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.881 =================================================================================================================== 00:27:37.881 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:37.881 [2024-06-10 13:55:52.293808] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:37.881 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1473357 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mSof1nV6oG 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mSof1nV6oG 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mSof1nV6oG 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mSof1nV6oG' 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1473629 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1473629 /var/tmp/bdevperf.sock 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1473629 ']' 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:38.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:38.141 13:55:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:38.141 [2024-06-10 13:55:52.517409] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:27:38.141 [2024-06-10 13:55:52.517474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473629 ] 00:27:38.141 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.141 [2024-06-10 13:55:52.611776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.401 [2024-06-10 13:55:52.683697] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.968 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:38.968 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:27:38.968 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mSof1nV6oG 00:27:39.227 [2024-06-10 13:55:53.622535] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:39.227 [2024-06-10 13:55:53.622614] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:39.227 [2024-06-10 13:55:53.627679] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:27:39.227 [2024-06-10 13:55:53.627707] posix.c: 591:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:27:39.227 [2024-06-10 13:55:53.627740] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:39.227 [2024-06-10 13:55:53.627916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ac420 (107): Transport endpoint is not connected 00:27:39.227 [2024-06-10 13:55:53.628907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ac420 (9): Bad file descriptor 00:27:39.227 [2024-06-10 13:55:53.629908] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:39.227 [2024-06-10 13:55:53.629920] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:27:39.227 [2024-06-10 13:55:53.629930] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:27:39.227 request: 00:27:39.227 { 00:27:39.227 "name": "TLSTEST", 00:27:39.227 "trtype": "tcp", 00:27:39.227 "traddr": "10.0.0.2", 00:27:39.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:39.227 "adrfam": "ipv4", 00:27:39.227 "trsvcid": "4420", 00:27:39.227 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:39.227 "psk": "/tmp/tmp.mSof1nV6oG", 00:27:39.227 "method": "bdev_nvme_attach_controller", 00:27:39.227 "req_id": 1 00:27:39.227 } 00:27:39.227 Got JSON-RPC error response 00:27:39.227 response: 00:27:39.227 { 00:27:39.227 "code": -5, 00:27:39.227 "message": "Input/output error" 00:27:39.227 } 00:27:39.227 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1473629 00:27:39.227 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1473629 ']' 00:27:39.227 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1473629 00:27:39.227 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:27:39.227 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:39.227 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1473629 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1473629' 00:27:39.487 killing process with pid 1473629 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1473629 00:27:39.487 Received shutdown signal, test time was about 10.000000 seconds 00:27:39.487 00:27:39.487 Latency(us) 00:27:39.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.487 =================================================================================================================== 00:27:39.487 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:39.487 [2024-06-10 13:55:53.706824] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1473629 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1473896 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1473896 /var/tmp/bdevperf.sock 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1473896 ']' 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:39.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:39.487 13:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:39.487 [2024-06-10 13:55:53.934523] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:27:39.487 [2024-06-10 13:55:53.934608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473896 ] 00:27:39.747 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.747 [2024-06-10 13:55:54.029364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.747 [2024-06-10 13:55:54.099237] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.684 13:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:40.684 13:55:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:27:40.684 13:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:27:40.684 [2024-06-10 13:55:55.071397] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:40.684 [2024-06-10 13:55:55.072788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3ead0 (9): Bad file descriptor 00:27:40.684 [2024-06-10 13:55:55.073786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.684 [2024-06-10 13:55:55.073799] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:27:40.684 [2024-06-10 13:55:55.073810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
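Note (annotation, not part of the captured output): the last negative case drops the PSK entirely. The listener on 10.0.0.2:4420 was created with -k, i.e. TLS required, so an attach without --psk cannot negotiate a session and fails the same way, as the response below shows. For contrast, a condensed sketch of the passing sequence exercised earlier in this run, from socket options through the attach; rpc.py stands in for the full script path used in the trace:

    # Target side (as traced earlier in this run).
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mSof1nV6oG
    # Initiator side (bdevperf), attaching with the same key:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mSof1nV6oG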
00:27:40.684 request: 00:27:40.684 { 00:27:40.684 "name": "TLSTEST", 00:27:40.684 "trtype": "tcp", 00:27:40.684 "traddr": "10.0.0.2", 00:27:40.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:40.684 "adrfam": "ipv4", 00:27:40.684 "trsvcid": "4420", 00:27:40.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.684 "method": "bdev_nvme_attach_controller", 00:27:40.684 "req_id": 1 00:27:40.684 } 00:27:40.684 Got JSON-RPC error response 00:27:40.684 response: 00:27:40.684 { 00:27:40.684 "code": -5, 00:27:40.684 "message": "Input/output error" 00:27:40.684 } 00:27:40.684 13:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1473896 00:27:40.684 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1473896 ']' 00:27:40.684 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1473896 00:27:40.684 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:27:40.684 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:40.684 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1473896 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1473896' 00:27:40.948 killing process with pid 1473896 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1473896 00:27:40.948 Received shutdown signal, test time was about 10.000000 seconds 00:27:40.948 00:27:40.948 Latency(us) 00:27:40.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.948 =================================================================================================================== 00:27:40.948 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1473896 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1468540 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1468540 ']' 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1468540 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1468540 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1468540' 00:27:40.948 killing process with pid 1468540 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1468540 
00:27:40.948 [2024-06-10 13:55:55.388856] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:40.948 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1468540 00:27:41.206 13:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:27:41.206 13:55:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # local prefix key digest 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # digest=2 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@711 -- # python - 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.s7tlBSc9pl 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.s7tlBSc9pl 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1474190 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1474190 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1474190 ']' 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:41.207 13:55:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:41.465 [2024-06-10 13:55:55.728016] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:27:41.465 [2024-06-10 13:55:55.728081] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.465 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.465 [2024-06-10 13:55:55.846481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.465 [2024-06-10 13:55:55.922926] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.465 [2024-06-10 13:55:55.922975] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.465 [2024-06-10 13:55:55.922988] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.465 [2024-06-10 13:55:55.923000] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.465 [2024-06-10 13:55:55.923011] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:41.465 [2024-06-10 13:55:55.923039] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.403 13:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:42.403 13:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:27:42.403 13:55:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:42.403 13:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:42.403 13:55:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:42.403 13:55:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.403 13:55:56 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.s7tlBSc9pl 00:27:42.403 13:55:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.s7tlBSc9pl 00:27:42.403 13:55:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:42.662 [2024-06-10 13:55:56.875943] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.662 13:55:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:27:42.662 13:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:27:42.921 [2024-06-10 13:55:57.321090] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:42.921 [2024-06-10 13:55:57.321325] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.921 13:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:27:43.180 malloc0 00:27:43.180 13:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:27:43.438 13:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s7tlBSc9pl 
00:27:43.697 [2024-06-10 13:55:58.008197] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.s7tlBSc9pl 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.s7tlBSc9pl' 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1474722 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1474722 /var/tmp/bdevperf.sock 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1474722 ']' 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:43.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:43.697 13:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:43.697 [2024-06-10 13:55:58.079236] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
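[Editor's note] bdevperf is started here with `-z` (idle until told what to do) and its own RPC socket at `/var/tmp/bdevperf.sock`; the next lines show the test attaching the TLS-protected controller and then driving I/O with `perform_tests` via `bdevperf.py`. A sketch of the same two calls, assuming the hypothetical `rpc()` helper from the earlier note is in scope; the attach parameters mirror the JSON-RPC request dumped later in the failing-permissions case.

```python
# Assumes the rpc() helper from the target-side sketch above is in scope.
BPERF_SOCK = "/var/tmp/bdevperf.sock"

# rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST ... --psk <key file>
rpc(BPERF_SOCK, "bdev_nvme_attach_controller", {
    "name": "TLSTEST",
    "trtype": "tcp",
    "adrfam": "ipv4",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "psk": "/tmp/tmp.s7tlBSc9pl",
})

# bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests: the reply is only
# expected once the 10-second verify run finishes, hence the generous -t timeout.
rpc(BPERF_SOCK, "perform_tests")
```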
00:27:43.697 [2024-06-10 13:55:58.079307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474722 ] 00:27:43.697 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.957 [2024-06-10 13:55:58.175938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.957 [2024-06-10 13:55:58.247961] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.524 13:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:44.524 13:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:27:44.524 13:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s7tlBSc9pl 00:27:44.782 [2024-06-10 13:55:59.190757] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:44.782 [2024-06-10 13:55:59.190835] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:45.040 TLSTESTn1 00:27:45.040 13:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:27:45.040 Running I/O for 10 seconds... 00:27:55.012 00:27:55.012 Latency(us) 00:27:55.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.012 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:55.012 Verification LBA range: start 0x0 length 0x2000 00:27:55.012 TLSTESTn1 : 10.04 3435.19 13.42 0.00 0.00 37178.11 4771.02 52009.37 00:27:55.012 =================================================================================================================== 00:27:55.012 Total : 3435.19 13.42 0.00 0.00 37178.11 4771.02 52009.37 00:27:55.012 0 00:27:55.012 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:55.012 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1474722 00:27:55.012 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1474722 ']' 00:27:55.012 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1474722 00:27:55.012 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1474722 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1474722' 00:27:55.271 killing process with pid 1474722 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1474722 00:27:55.271 Received shutdown signal, test time was about 10.000000 seconds 00:27:55.271 00:27:55.271 Latency(us) 00:27:55.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:27:55.271 =================================================================================================================== 00:27:55.271 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:55.271 [2024-06-10 13:56:09.542511] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1474722 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.s7tlBSc9pl 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.s7tlBSc9pl 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.s7tlBSc9pl 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.s7tlBSc9pl 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.s7tlBSc9pl' 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1476600 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1476600 /var/tmp/bdevperf.sock 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1476600 ']' 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:55.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:55.271 13:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:55.529 [2024-06-10 13:56:09.784960] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
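[Editor's note] target/tls.sh@170-171 deliberately relaxes the key file to 0666 and then expects the attach to fail ("Incorrect permissions for PSK file" on the next lines). The exact mask SPDK enforces is not visible in this log; the check sketched here, which rejects any group/other access, is only an assumption consistent with 0600 succeeding earlier and 0666 failing here.

```python
import os
import stat


def psk_file_permissions_ok(path: str) -> bool:
    """Assumed equivalent of the PSK file check: owner-only access.
    The precise mask SPDK enforces is not shown in this log."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0


# chmod 0600 -> True (earlier, successful run); chmod 0666 -> False (this run)
print(psk_file_permissions_ok("/tmp/tmp.s7tlBSc9pl"))
```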
00:27:55.530 [2024-06-10 13:56:09.785026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476600 ] 00:27:55.530 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.530 [2024-06-10 13:56:09.879127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.530 [2024-06-10 13:56:09.945692] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.466 13:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:56.466 13:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:27:56.466 13:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s7tlBSc9pl 00:27:56.466 [2024-06-10 13:56:10.909038] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:56.466 [2024-06-10 13:56:10.909089] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:27:56.466 [2024-06-10 13:56:10.909098] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.s7tlBSc9pl 00:27:56.466 request: 00:27:56.466 { 00:27:56.466 "name": "TLSTEST", 00:27:56.466 "trtype": "tcp", 00:27:56.466 "traddr": "10.0.0.2", 00:27:56.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:56.466 "adrfam": "ipv4", 00:27:56.466 "trsvcid": "4420", 00:27:56.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:56.466 "psk": "/tmp/tmp.s7tlBSc9pl", 00:27:56.466 "method": "bdev_nvme_attach_controller", 00:27:56.466 "req_id": 1 00:27:56.466 } 00:27:56.466 Got JSON-RPC error response 00:27:56.466 response: 00:27:56.466 { 00:27:56.466 "code": -1, 00:27:56.466 "message": "Operation not permitted" 00:27:56.466 } 00:27:56.724 13:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1476600 00:27:56.724 13:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1476600 ']' 00:27:56.724 13:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1476600 00:27:56.724 13:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:27:56.725 13:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:56.725 13:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1476600 00:27:56.725 13:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:27:56.725 13:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:27:56.725 13:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1476600' 00:27:56.725 killing process with pid 1476600 00:27:56.725 13:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1476600 00:27:56.725 Received shutdown signal, test time was about 10.000000 seconds 00:27:56.725 00:27:56.725 Latency(us) 00:27:56.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.725 =================================================================================================================== 00:27:56.725 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:56.725 13:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 
-- # wait 1476600 00:27:56.725 13:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:27:56.725 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:27:56.725 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:56.725 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:56.725 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:56.725 13:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1474190 00:27:56.725 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1474190 ']' 00:27:56.725 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1474190 00:27:56.725 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:27:56.725 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:56.725 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1474190 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1474190' 00:27:56.984 killing process with pid 1474190 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1474190 00:27:56.984 [2024-06-10 13:56:11.225188] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1474190 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1476887 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1476887 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1476887 ']' 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:56.984 13:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:57.244 [2024-06-10 13:56:11.493764] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:27:57.244 [2024-06-10 13:56:11.493830] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.244 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.244 [2024-06-10 13:56:11.611819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.244 [2024-06-10 13:56:11.688966] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.244 [2024-06-10 13:56:11.689014] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.244 [2024-06-10 13:56:11.689028] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.244 [2024-06-10 13:56:11.689040] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.244 [2024-06-10 13:56:11.689050] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:57.244 [2024-06-10 13:56:11.689077] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.s7tlBSc9pl 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.s7tlBSc9pl 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.s7tlBSc9pl 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.s7tlBSc9pl 00:27:58.183 13:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:58.442 [2024-06-10 13:56:12.658016] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.442 13:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:27:58.442 13:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:27:58.700 [2024-06-10 13:56:13.107193] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:27:58.700 [2024-06-10 13:56:13.107427] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.700 13:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:27:58.958 malloc0 00:27:58.958 13:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:27:59.216 13:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s7tlBSc9pl 00:27:59.476 [2024-06-10 13:56:13.774231] tcp.c:3595:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:27:59.476 [2024-06-10 13:56:13.774265] tcp.c:3681:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:27:59.476 [2024-06-10 13:56:13.774300] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:27:59.476 request: 00:27:59.476 { 00:27:59.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.476 "host": "nqn.2016-06.io.spdk:host1", 00:27:59.476 "psk": "/tmp/tmp.s7tlBSc9pl", 00:27:59.476 "method": "nvmf_subsystem_add_host", 00:27:59.476 "req_id": 1 00:27:59.476 } 00:27:59.476 Got JSON-RPC error response 00:27:59.476 response: 00:27:59.476 { 00:27:59.476 "code": -32603, 00:27:59.476 "message": "Internal error" 00:27:59.476 } 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1476887 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1476887 ']' 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1476887 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1476887 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1476887' 00:27:59.476 killing process with pid 1476887 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1476887 00:27:59.476 13:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1476887 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.s7tlBSc9pl 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 
-- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1477445 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1477445 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1477445 ']' 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:59.736 13:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:59.736 [2024-06-10 13:56:14.122722] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:27:59.736 [2024-06-10 13:56:14.122784] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.736 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.994 [2024-06-10 13:56:14.240729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.994 [2024-06-10 13:56:14.316875] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.994 [2024-06-10 13:56:14.316937] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.994 [2024-06-10 13:56:14.316950] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.994 [2024-06-10 13:56:14.316962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.994 [2024-06-10 13:56:14.316972] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:59.994 [2024-06-10 13:56:14.317000] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.563 13:56:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:00.563 13:56:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:28:00.563 13:56:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:00.563 13:56:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:00.563 13:56:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:00.821 13:56:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.821 13:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.s7tlBSc9pl 00:28:00.821 13:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.s7tlBSc9pl 00:28:00.821 13:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:00.821 [2024-06-10 13:56:15.285701] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.080 13:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:28:01.080 13:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:28:01.338 [2024-06-10 13:56:15.738896] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:01.338 [2024-06-10 13:56:15.739146] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.338 13:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:28:01.597 malloc0 00:28:01.597 13:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:01.855 13:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s7tlBSc9pl 00:28:02.114 [2024-06-10 13:56:16.418068] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:02.114 13:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1477745 00:28:02.114 13:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:02.114 13:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:02.114 13:56:16 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1477745 /var/tmp/bdevperf.sock 00:28:02.114 13:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1477745 ']' 00:28:02.114 13:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:02.114 13:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:02.114 13:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:02.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:02.114 13:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:02.114 13:56:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:02.114 [2024-06-10 13:56:16.487397] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:28:02.114 [2024-06-10 13:56:16.487463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477745 ] 00:28:02.114 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.114 [2024-06-10 13:56:16.580742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.373 [2024-06-10 13:56:16.649054] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.940 13:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:02.940 13:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:28:02.940 13:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s7tlBSc9pl 00:28:03.198 [2024-06-10 13:56:17.583560] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:03.198 [2024-06-10 13:56:17.583657] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:03.198 TLSTESTn1 00:28:03.457 13:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:28:03.717 13:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:28:03.717 "subsystems": [ 00:28:03.717 { 00:28:03.717 "subsystem": "keyring", 00:28:03.717 "config": [] 00:28:03.717 }, 00:28:03.717 { 00:28:03.717 "subsystem": "iobuf", 00:28:03.717 "config": [ 00:28:03.717 { 00:28:03.717 "method": "iobuf_set_options", 00:28:03.717 "params": { 00:28:03.717 "small_pool_count": 8192, 00:28:03.717 "large_pool_count": 1024, 00:28:03.717 "small_bufsize": 8192, 00:28:03.717 "large_bufsize": 135168 00:28:03.717 } 00:28:03.717 } 00:28:03.717 ] 00:28:03.717 }, 00:28:03.717 { 00:28:03.717 "subsystem": "sock", 00:28:03.717 "config": [ 00:28:03.717 { 00:28:03.717 "method": "sock_set_default_impl", 00:28:03.717 "params": { 00:28:03.717 "impl_name": "posix" 00:28:03.717 } 00:28:03.717 }, 00:28:03.717 { 00:28:03.717 "method": "sock_impl_set_options", 00:28:03.717 "params": { 00:28:03.717 "impl_name": "ssl", 00:28:03.717 "recv_buf_size": 4096, 00:28:03.717 "send_buf_size": 4096, 00:28:03.717 "enable_recv_pipe": true, 00:28:03.717 "enable_quickack": false, 00:28:03.717 "enable_placement_id": 0, 00:28:03.717 "enable_zerocopy_send_server": true, 00:28:03.717 "enable_zerocopy_send_client": false, 00:28:03.717 "zerocopy_threshold": 0, 00:28:03.717 "tls_version": 0, 00:28:03.717 "enable_ktls": false, 00:28:03.717 "enable_new_session_tickets": true 00:28:03.717 } 00:28:03.717 }, 00:28:03.717 { 00:28:03.717 "method": "sock_impl_set_options", 00:28:03.717 "params": { 00:28:03.717 "impl_name": "posix", 00:28:03.717 
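[Editor's note] `rpc.py save_config` (target/tls.sh@196 here, and @197 for bdevperf further down) appears to have no single server-side counterpart; the client assembles the dump itself. Assuming it walks `framework_get_subsystems` and `framework_get_config` the way recent rpc.py does — an assumption, not confirmed by this log — a rough equivalent with the earlier hypothetical `rpc()` helper would be:

```python
import json

# Assumes the rpc() helper sketched earlier; the two framework_* method names
# are an assumption about how rpc.py builds its save_config output.
def save_config(sock_path):
    subsystems = []
    for sub in rpc(sock_path, "framework_get_subsystems")["result"]:
        subsystems.append({
            "subsystem": sub["subsystem"],
            "config": rpc(sock_path, "framework_get_config",
                          {"name": sub["subsystem"]})["result"],
        })
    return {"subsystems": subsystems}


tgtconf = save_config("/var/tmp/spdk.sock")           # dumped below as tgtconf
bdevperfconf = save_config("/var/tmp/bdevperf.sock")  # dumped below as bdevperfconf
print(json.dumps(tgtconf, indent=2))
```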
"recv_buf_size": 2097152, 00:28:03.717 "send_buf_size": 2097152, 00:28:03.717 "enable_recv_pipe": true, 00:28:03.717 "enable_quickack": false, 00:28:03.717 "enable_placement_id": 0, 00:28:03.717 "enable_zerocopy_send_server": true, 00:28:03.717 "enable_zerocopy_send_client": false, 00:28:03.717 "zerocopy_threshold": 0, 00:28:03.717 "tls_version": 0, 00:28:03.717 "enable_ktls": false, 00:28:03.717 "enable_new_session_tickets": false 00:28:03.717 } 00:28:03.717 } 00:28:03.717 ] 00:28:03.717 }, 00:28:03.717 { 00:28:03.717 "subsystem": "vmd", 00:28:03.717 "config": [] 00:28:03.717 }, 00:28:03.717 { 00:28:03.717 "subsystem": "accel", 00:28:03.717 "config": [ 00:28:03.717 { 00:28:03.717 "method": "accel_set_options", 00:28:03.717 "params": { 00:28:03.717 "small_cache_size": 128, 00:28:03.717 "large_cache_size": 16, 00:28:03.717 "task_count": 2048, 00:28:03.717 "sequence_count": 2048, 00:28:03.717 "buf_count": 2048 00:28:03.717 } 00:28:03.717 } 00:28:03.717 ] 00:28:03.717 }, 00:28:03.717 { 00:28:03.717 "subsystem": "bdev", 00:28:03.717 "config": [ 00:28:03.717 { 00:28:03.717 "method": "bdev_set_options", 00:28:03.717 "params": { 00:28:03.717 "bdev_io_pool_size": 65535, 00:28:03.717 "bdev_io_cache_size": 256, 00:28:03.717 "bdev_auto_examine": true, 00:28:03.717 "iobuf_small_cache_size": 128, 00:28:03.717 "iobuf_large_cache_size": 16 00:28:03.717 } 00:28:03.717 }, 00:28:03.717 { 00:28:03.717 "method": "bdev_raid_set_options", 00:28:03.717 "params": { 00:28:03.717 "process_window_size_kb": 1024 00:28:03.717 } 00:28:03.717 }, 00:28:03.717 { 00:28:03.717 "method": "bdev_iscsi_set_options", 00:28:03.717 "params": { 00:28:03.717 "timeout_sec": 30 00:28:03.717 } 00:28:03.717 }, 00:28:03.717 { 00:28:03.717 "method": "bdev_nvme_set_options", 00:28:03.717 "params": { 00:28:03.717 "action_on_timeout": "none", 00:28:03.717 "timeout_us": 0, 00:28:03.717 "timeout_admin_us": 0, 00:28:03.718 "keep_alive_timeout_ms": 10000, 00:28:03.718 "arbitration_burst": 0, 00:28:03.718 "low_priority_weight": 0, 00:28:03.718 "medium_priority_weight": 0, 00:28:03.718 "high_priority_weight": 0, 00:28:03.718 "nvme_adminq_poll_period_us": 10000, 00:28:03.718 "nvme_ioq_poll_period_us": 0, 00:28:03.718 "io_queue_requests": 0, 00:28:03.718 "delay_cmd_submit": true, 00:28:03.718 "transport_retry_count": 4, 00:28:03.718 "bdev_retry_count": 3, 00:28:03.718 "transport_ack_timeout": 0, 00:28:03.718 "ctrlr_loss_timeout_sec": 0, 00:28:03.718 "reconnect_delay_sec": 0, 00:28:03.718 "fast_io_fail_timeout_sec": 0, 00:28:03.718 "disable_auto_failback": false, 00:28:03.718 "generate_uuids": false, 00:28:03.718 "transport_tos": 0, 00:28:03.718 "nvme_error_stat": false, 00:28:03.718 "rdma_srq_size": 0, 00:28:03.718 "io_path_stat": false, 00:28:03.718 "allow_accel_sequence": false, 00:28:03.718 "rdma_max_cq_size": 0, 00:28:03.718 "rdma_cm_event_timeout_ms": 0, 00:28:03.718 "dhchap_digests": [ 00:28:03.718 "sha256", 00:28:03.718 "sha384", 00:28:03.718 "sha512" 00:28:03.718 ], 00:28:03.718 "dhchap_dhgroups": [ 00:28:03.718 "null", 00:28:03.718 "ffdhe2048", 00:28:03.718 "ffdhe3072", 00:28:03.718 "ffdhe4096", 00:28:03.718 "ffdhe6144", 00:28:03.718 "ffdhe8192" 00:28:03.718 ] 00:28:03.718 } 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "method": "bdev_nvme_set_hotplug", 00:28:03.718 "params": { 00:28:03.718 "period_us": 100000, 00:28:03.718 "enable": false 00:28:03.718 } 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "method": "bdev_malloc_create", 00:28:03.718 "params": { 00:28:03.718 "name": "malloc0", 00:28:03.718 "num_blocks": 8192, 00:28:03.718 
"block_size": 4096, 00:28:03.718 "physical_block_size": 4096, 00:28:03.718 "uuid": "cc7865a8-6294-42cc-ad4f-47610cad41ed", 00:28:03.718 "optimal_io_boundary": 0 00:28:03.718 } 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "method": "bdev_wait_for_examine" 00:28:03.718 } 00:28:03.718 ] 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "subsystem": "nbd", 00:28:03.718 "config": [] 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "subsystem": "scheduler", 00:28:03.718 "config": [ 00:28:03.718 { 00:28:03.718 "method": "framework_set_scheduler", 00:28:03.718 "params": { 00:28:03.718 "name": "static" 00:28:03.718 } 00:28:03.718 } 00:28:03.718 ] 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "subsystem": "nvmf", 00:28:03.718 "config": [ 00:28:03.718 { 00:28:03.718 "method": "nvmf_set_config", 00:28:03.718 "params": { 00:28:03.718 "discovery_filter": "match_any", 00:28:03.718 "admin_cmd_passthru": { 00:28:03.718 "identify_ctrlr": false 00:28:03.718 } 00:28:03.718 } 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "method": "nvmf_set_max_subsystems", 00:28:03.718 "params": { 00:28:03.718 "max_subsystems": 1024 00:28:03.718 } 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "method": "nvmf_set_crdt", 00:28:03.718 "params": { 00:28:03.718 "crdt1": 0, 00:28:03.718 "crdt2": 0, 00:28:03.718 "crdt3": 0 00:28:03.718 } 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "method": "nvmf_create_transport", 00:28:03.718 "params": { 00:28:03.718 "trtype": "TCP", 00:28:03.718 "max_queue_depth": 128, 00:28:03.718 "max_io_qpairs_per_ctrlr": 127, 00:28:03.718 "in_capsule_data_size": 4096, 00:28:03.718 "max_io_size": 131072, 00:28:03.718 "io_unit_size": 131072, 00:28:03.718 "max_aq_depth": 128, 00:28:03.718 "num_shared_buffers": 511, 00:28:03.718 "buf_cache_size": 4294967295, 00:28:03.718 "dif_insert_or_strip": false, 00:28:03.718 "zcopy": false, 00:28:03.718 "c2h_success": false, 00:28:03.718 "sock_priority": 0, 00:28:03.718 "abort_timeout_sec": 1, 00:28:03.718 "ack_timeout": 0, 00:28:03.718 "data_wr_pool_size": 0 00:28:03.718 } 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "method": "nvmf_create_subsystem", 00:28:03.718 "params": { 00:28:03.718 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.718 "allow_any_host": false, 00:28:03.718 "serial_number": "SPDK00000000000001", 00:28:03.718 "model_number": "SPDK bdev Controller", 00:28:03.718 "max_namespaces": 10, 00:28:03.718 "min_cntlid": 1, 00:28:03.718 "max_cntlid": 65519, 00:28:03.718 "ana_reporting": false 00:28:03.718 } 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "method": "nvmf_subsystem_add_host", 00:28:03.718 "params": { 00:28:03.718 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.718 "host": "nqn.2016-06.io.spdk:host1", 00:28:03.718 "psk": "/tmp/tmp.s7tlBSc9pl" 00:28:03.718 } 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "method": "nvmf_subsystem_add_ns", 00:28:03.718 "params": { 00:28:03.718 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.718 "namespace": { 00:28:03.718 "nsid": 1, 00:28:03.718 "bdev_name": "malloc0", 00:28:03.718 "nguid": "CC7865A8629442CCAD4F47610CAD41ED", 00:28:03.718 "uuid": "cc7865a8-6294-42cc-ad4f-47610cad41ed", 00:28:03.718 "no_auto_visible": false 00:28:03.718 } 00:28:03.718 } 00:28:03.718 }, 00:28:03.718 { 00:28:03.718 "method": "nvmf_subsystem_add_listener", 00:28:03.718 "params": { 00:28:03.718 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.718 "listen_address": { 00:28:03.718 "trtype": "TCP", 00:28:03.718 "adrfam": "IPv4", 00:28:03.718 "traddr": "10.0.0.2", 00:28:03.718 "trsvcid": "4420" 00:28:03.718 }, 00:28:03.718 "secure_channel": true 00:28:03.718 } 00:28:03.718 } 
00:28:03.718 ] 00:28:03.718 } 00:28:03.718 ] 00:28:03.718 }' 00:28:03.718 13:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:28:03.978 13:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:28:03.978 "subsystems": [ 00:28:03.978 { 00:28:03.978 "subsystem": "keyring", 00:28:03.978 "config": [] 00:28:03.978 }, 00:28:03.978 { 00:28:03.978 "subsystem": "iobuf", 00:28:03.978 "config": [ 00:28:03.978 { 00:28:03.978 "method": "iobuf_set_options", 00:28:03.978 "params": { 00:28:03.978 "small_pool_count": 8192, 00:28:03.978 "large_pool_count": 1024, 00:28:03.978 "small_bufsize": 8192, 00:28:03.978 "large_bufsize": 135168 00:28:03.978 } 00:28:03.978 } 00:28:03.978 ] 00:28:03.978 }, 00:28:03.978 { 00:28:03.978 "subsystem": "sock", 00:28:03.978 "config": [ 00:28:03.978 { 00:28:03.978 "method": "sock_set_default_impl", 00:28:03.978 "params": { 00:28:03.978 "impl_name": "posix" 00:28:03.978 } 00:28:03.978 }, 00:28:03.978 { 00:28:03.978 "method": "sock_impl_set_options", 00:28:03.978 "params": { 00:28:03.978 "impl_name": "ssl", 00:28:03.978 "recv_buf_size": 4096, 00:28:03.978 "send_buf_size": 4096, 00:28:03.978 "enable_recv_pipe": true, 00:28:03.978 "enable_quickack": false, 00:28:03.978 "enable_placement_id": 0, 00:28:03.978 "enable_zerocopy_send_server": true, 00:28:03.978 "enable_zerocopy_send_client": false, 00:28:03.978 "zerocopy_threshold": 0, 00:28:03.978 "tls_version": 0, 00:28:03.978 "enable_ktls": false, 00:28:03.978 "enable_new_session_tickets": true 00:28:03.978 } 00:28:03.978 }, 00:28:03.978 { 00:28:03.978 "method": "sock_impl_set_options", 00:28:03.978 "params": { 00:28:03.978 "impl_name": "posix", 00:28:03.978 "recv_buf_size": 2097152, 00:28:03.978 "send_buf_size": 2097152, 00:28:03.978 "enable_recv_pipe": true, 00:28:03.978 "enable_quickack": false, 00:28:03.978 "enable_placement_id": 0, 00:28:03.978 "enable_zerocopy_send_server": true, 00:28:03.978 "enable_zerocopy_send_client": false, 00:28:03.979 "zerocopy_threshold": 0, 00:28:03.979 "tls_version": 0, 00:28:03.979 "enable_ktls": false, 00:28:03.979 "enable_new_session_tickets": false 00:28:03.979 } 00:28:03.979 } 00:28:03.979 ] 00:28:03.979 }, 00:28:03.979 { 00:28:03.979 "subsystem": "vmd", 00:28:03.979 "config": [] 00:28:03.979 }, 00:28:03.979 { 00:28:03.979 "subsystem": "accel", 00:28:03.979 "config": [ 00:28:03.979 { 00:28:03.979 "method": "accel_set_options", 00:28:03.979 "params": { 00:28:03.979 "small_cache_size": 128, 00:28:03.979 "large_cache_size": 16, 00:28:03.979 "task_count": 2048, 00:28:03.979 "sequence_count": 2048, 00:28:03.979 "buf_count": 2048 00:28:03.979 } 00:28:03.979 } 00:28:03.979 ] 00:28:03.979 }, 00:28:03.979 { 00:28:03.979 "subsystem": "bdev", 00:28:03.979 "config": [ 00:28:03.979 { 00:28:03.979 "method": "bdev_set_options", 00:28:03.979 "params": { 00:28:03.979 "bdev_io_pool_size": 65535, 00:28:03.979 "bdev_io_cache_size": 256, 00:28:03.979 "bdev_auto_examine": true, 00:28:03.979 "iobuf_small_cache_size": 128, 00:28:03.979 "iobuf_large_cache_size": 16 00:28:03.979 } 00:28:03.979 }, 00:28:03.979 { 00:28:03.979 "method": "bdev_raid_set_options", 00:28:03.979 "params": { 00:28:03.979 "process_window_size_kb": 1024 00:28:03.979 } 00:28:03.979 }, 00:28:03.979 { 00:28:03.979 "method": "bdev_iscsi_set_options", 00:28:03.979 "params": { 00:28:03.979 "timeout_sec": 30 00:28:03.979 } 00:28:03.979 }, 00:28:03.979 { 00:28:03.979 "method": "bdev_nvme_set_options", 00:28:03.979 "params": { 
00:28:03.979 "action_on_timeout": "none", 00:28:03.979 "timeout_us": 0, 00:28:03.979 "timeout_admin_us": 0, 00:28:03.979 "keep_alive_timeout_ms": 10000, 00:28:03.979 "arbitration_burst": 0, 00:28:03.979 "low_priority_weight": 0, 00:28:03.979 "medium_priority_weight": 0, 00:28:03.979 "high_priority_weight": 0, 00:28:03.979 "nvme_adminq_poll_period_us": 10000, 00:28:03.979 "nvme_ioq_poll_period_us": 0, 00:28:03.979 "io_queue_requests": 512, 00:28:03.979 "delay_cmd_submit": true, 00:28:03.979 "transport_retry_count": 4, 00:28:03.979 "bdev_retry_count": 3, 00:28:03.979 "transport_ack_timeout": 0, 00:28:03.979 "ctrlr_loss_timeout_sec": 0, 00:28:03.979 "reconnect_delay_sec": 0, 00:28:03.979 "fast_io_fail_timeout_sec": 0, 00:28:03.979 "disable_auto_failback": false, 00:28:03.979 "generate_uuids": false, 00:28:03.979 "transport_tos": 0, 00:28:03.979 "nvme_error_stat": false, 00:28:03.979 "rdma_srq_size": 0, 00:28:03.979 "io_path_stat": false, 00:28:03.979 "allow_accel_sequence": false, 00:28:03.979 "rdma_max_cq_size": 0, 00:28:03.979 "rdma_cm_event_timeout_ms": 0, 00:28:03.979 "dhchap_digests": [ 00:28:03.979 "sha256", 00:28:03.979 "sha384", 00:28:03.979 "sha512" 00:28:03.979 ], 00:28:03.979 "dhchap_dhgroups": [ 00:28:03.979 "null", 00:28:03.979 "ffdhe2048", 00:28:03.979 "ffdhe3072", 00:28:03.979 "ffdhe4096", 00:28:03.979 "ffdhe6144", 00:28:03.979 "ffdhe8192" 00:28:03.979 ] 00:28:03.979 } 00:28:03.979 }, 00:28:03.979 { 00:28:03.979 "method": "bdev_nvme_attach_controller", 00:28:03.979 "params": { 00:28:03.979 "name": "TLSTEST", 00:28:03.979 "trtype": "TCP", 00:28:03.979 "adrfam": "IPv4", 00:28:03.979 "traddr": "10.0.0.2", 00:28:03.979 "trsvcid": "4420", 00:28:03.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.979 "prchk_reftag": false, 00:28:03.979 "prchk_guard": false, 00:28:03.979 "ctrlr_loss_timeout_sec": 0, 00:28:03.979 "reconnect_delay_sec": 0, 00:28:03.979 "fast_io_fail_timeout_sec": 0, 00:28:03.979 "psk": "/tmp/tmp.s7tlBSc9pl", 00:28:03.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.979 "hdgst": false, 00:28:03.979 "ddgst": false 00:28:03.979 } 00:28:03.979 }, 00:28:03.979 { 00:28:03.979 "method": "bdev_nvme_set_hotplug", 00:28:03.979 "params": { 00:28:03.979 "period_us": 100000, 00:28:03.979 "enable": false 00:28:03.979 } 00:28:03.979 }, 00:28:03.979 { 00:28:03.979 "method": "bdev_wait_for_examine" 00:28:03.979 } 00:28:03.979 ] 00:28:03.979 }, 00:28:03.979 { 00:28:03.979 "subsystem": "nbd", 00:28:03.979 "config": [] 00:28:03.979 } 00:28:03.979 ] 00:28:03.979 }' 00:28:03.979 13:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1477745 00:28:03.979 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1477745 ']' 00:28:03.979 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1477745 00:28:03.979 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:28:03.979 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:03.979 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1477745 00:28:03.979 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:28:03.979 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:28:03.979 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1477745' 00:28:03.979 killing process with pid 1477745 00:28:03.979 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1477745 
00:28:03.979 Received shutdown signal, test time was about 10.000000 seconds 00:28:03.979 00:28:03.979 Latency(us) 00:28:03.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.979 =================================================================================================================== 00:28:03.979 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:03.979 [2024-06-10 13:56:18.332557] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:03.979 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1477745 00:28:04.238 13:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1477445 00:28:04.238 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1477445 ']' 00:28:04.238 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1477445 00:28:04.238 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:28:04.238 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:04.238 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1477445 00:28:04.238 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:04.238 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:04.238 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1477445' 00:28:04.238 killing process with pid 1477445 00:28:04.238 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1477445 00:28:04.238 [2024-06-10 13:56:18.574642] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:04.238 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1477445 00:28:04.500 13:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:28:04.500 13:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:04.500 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:04.500 13:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:28:04.500 "subsystems": [ 00:28:04.500 { 00:28:04.500 "subsystem": "keyring", 00:28:04.500 "config": [] 00:28:04.500 }, 00:28:04.500 { 00:28:04.500 "subsystem": "iobuf", 00:28:04.500 "config": [ 00:28:04.500 { 00:28:04.500 "method": "iobuf_set_options", 00:28:04.500 "params": { 00:28:04.500 "small_pool_count": 8192, 00:28:04.500 "large_pool_count": 1024, 00:28:04.500 "small_bufsize": 8192, 00:28:04.500 "large_bufsize": 135168 00:28:04.500 } 00:28:04.500 } 00:28:04.500 ] 00:28:04.500 }, 00:28:04.500 { 00:28:04.500 "subsystem": "sock", 00:28:04.500 "config": [ 00:28:04.500 { 00:28:04.500 "method": "sock_set_default_impl", 00:28:04.500 "params": { 00:28:04.500 "impl_name": "posix" 00:28:04.500 } 00:28:04.500 }, 00:28:04.500 { 00:28:04.500 "method": "sock_impl_set_options", 00:28:04.500 "params": { 00:28:04.500 "impl_name": "ssl", 00:28:04.500 "recv_buf_size": 4096, 00:28:04.500 "send_buf_size": 4096, 00:28:04.500 "enable_recv_pipe": true, 00:28:04.500 "enable_quickack": false, 00:28:04.500 "enable_placement_id": 0, 00:28:04.500 "enable_zerocopy_send_server": true, 00:28:04.500 "enable_zerocopy_send_client": false, 00:28:04.500 "zerocopy_threshold": 0, 00:28:04.500 "tls_version": 0, 00:28:04.500 
"enable_ktls": false, 00:28:04.500 "enable_new_session_tickets": true 00:28:04.500 } 00:28:04.500 }, 00:28:04.500 { 00:28:04.500 "method": "sock_impl_set_options", 00:28:04.500 "params": { 00:28:04.500 "impl_name": "posix", 00:28:04.500 "recv_buf_size": 2097152, 00:28:04.500 "send_buf_size": 2097152, 00:28:04.500 "enable_recv_pipe": true, 00:28:04.500 "enable_quickack": false, 00:28:04.500 "enable_placement_id": 0, 00:28:04.500 "enable_zerocopy_send_server": true, 00:28:04.500 "enable_zerocopy_send_client": false, 00:28:04.500 "zerocopy_threshold": 0, 00:28:04.500 "tls_version": 0, 00:28:04.500 "enable_ktls": false, 00:28:04.500 "enable_new_session_tickets": false 00:28:04.500 } 00:28:04.500 } 00:28:04.500 ] 00:28:04.500 }, 00:28:04.500 { 00:28:04.500 "subsystem": "vmd", 00:28:04.500 "config": [] 00:28:04.500 }, 00:28:04.500 { 00:28:04.500 "subsystem": "accel", 00:28:04.500 "config": [ 00:28:04.500 { 00:28:04.500 "method": "accel_set_options", 00:28:04.501 "params": { 00:28:04.501 "small_cache_size": 128, 00:28:04.501 "large_cache_size": 16, 00:28:04.501 "task_count": 2048, 00:28:04.501 "sequence_count": 2048, 00:28:04.501 "buf_count": 2048 00:28:04.501 } 00:28:04.501 } 00:28:04.501 ] 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "subsystem": "bdev", 00:28:04.501 "config": [ 00:28:04.501 { 00:28:04.501 "method": "bdev_set_options", 00:28:04.501 "params": { 00:28:04.501 "bdev_io_pool_size": 65535, 00:28:04.501 "bdev_io_cache_size": 256, 00:28:04.501 "bdev_auto_examine": true, 00:28:04.501 "iobuf_small_cache_size": 128, 00:28:04.501 "iobuf_large_cache_size": 16 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "bdev_raid_set_options", 00:28:04.501 "params": { 00:28:04.501 "process_window_size_kb": 1024 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "bdev_iscsi_set_options", 00:28:04.501 "params": { 00:28:04.501 "timeout_sec": 30 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "bdev_nvme_set_options", 00:28:04.501 "params": { 00:28:04.501 "action_on_timeout": "none", 00:28:04.501 "timeout_us": 0, 00:28:04.501 "timeout_admin_us": 0, 00:28:04.501 "keep_alive_timeout_ms": 10000, 00:28:04.501 "arbitration_burst": 0, 00:28:04.501 "low_priority_weight": 0, 00:28:04.501 "medium_priority_weight": 0, 00:28:04.501 "high_priority_weight": 0, 00:28:04.501 "nvme_adminq_poll_period_us": 10000, 00:28:04.501 "nvme_ioq_poll_period_us": 0, 00:28:04.501 "io_queue_requests": 0, 00:28:04.501 "delay_cmd_submit": true, 00:28:04.501 "transport_retry_count": 4, 00:28:04.501 "bdev_retry_count": 3, 00:28:04.501 "transport_ack_timeout": 0, 00:28:04.501 "ctrlr_loss_timeout_sec": 0, 00:28:04.501 "reconnect_delay_sec": 0, 00:28:04.501 "fast_io_fail_timeout_sec": 0, 00:28:04.501 "disable_auto_failback": false, 00:28:04.501 "generate_uuids": false, 00:28:04.501 "transport_tos": 0, 00:28:04.501 "nvme_error_stat": false, 00:28:04.501 "rdma_srq_size": 0, 00:28:04.501 "io_path_stat": false, 00:28:04.501 "allow_accel_sequence": false, 00:28:04.501 "rdma_max_cq_size": 0, 00:28:04.501 "rdma_cm_event_timeout_ms": 0, 00:28:04.501 "dhchap_digests": [ 00:28:04.501 "sha256", 00:28:04.501 "sha384", 00:28:04.501 "sha512" 00:28:04.501 ], 00:28:04.501 "dhchap_dhgroups": [ 00:28:04.501 "null", 00:28:04.501 "ffdhe2048", 00:28:04.501 "ffdhe3072", 00:28:04.501 "ffdhe4096", 00:28:04.501 "ffdhe6144", 00:28:04.501 "ffdhe8192" 00:28:04.501 ] 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "bdev_nvme_set_hotplug", 00:28:04.501 "params": { 00:28:04.501 "period_us": 
100000, 00:28:04.501 "enable": false 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "bdev_malloc_create", 00:28:04.501 "params": { 00:28:04.501 "name": "malloc0", 00:28:04.501 "num_blocks": 8192, 00:28:04.501 "block_size": 4096, 00:28:04.501 "physical_block_size": 4096, 00:28:04.501 "uuid": "cc7865a8-6294-42cc-ad4f-47610cad41ed", 00:28:04.501 "optimal_io_boundary": 0 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "bdev_wait_for_examine" 00:28:04.501 } 00:28:04.501 ] 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "subsystem": "nbd", 00:28:04.501 "config": [] 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "subsystem": "scheduler", 00:28:04.501 "config": [ 00:28:04.501 { 00:28:04.501 "method": "framework_set_scheduler", 00:28:04.501 "params": { 00:28:04.501 "name": "static" 00:28:04.501 } 00:28:04.501 } 00:28:04.501 ] 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "subsystem": "nvmf", 00:28:04.501 "config": [ 00:28:04.501 { 00:28:04.501 "method": "nvmf_set_config", 00:28:04.501 "params": { 00:28:04.501 "discovery_filter": "match_any", 00:28:04.501 "admin_cmd_passthru": { 00:28:04.501 "identify_ctrlr": false 00:28:04.501 } 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "nvmf_set_max_subsystems", 00:28:04.501 "params": { 00:28:04.501 "max_subsystems": 1024 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "nvmf_set_crdt", 00:28:04.501 "params": { 00:28:04.501 "crdt1": 0, 00:28:04.501 "crdt2": 0, 00:28:04.501 "crdt3": 0 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "nvmf_create_transport", 00:28:04.501 "params": { 00:28:04.501 "trtype": "TCP", 00:28:04.501 "max_queue_depth": 128, 00:28:04.501 "max_io_qpairs_per_ctrlr": 127, 00:28:04.501 "in_capsule_data_size": 4096, 00:28:04.501 "max_io_size": 131072, 00:28:04.501 "io_unit_size": 131072, 00:28:04.501 "max_aq_depth": 128, 00:28:04.501 "num_shared_buffers": 511, 00:28:04.501 "buf_cache_size": 4294967295, 00:28:04.501 "dif_insert_or_strip": false, 00:28:04.501 "zcopy": false, 00:28:04.501 "c2h_success": false, 00:28:04.501 "sock_priority": 0, 00:28:04.501 "abort_timeout_sec": 1, 00:28:04.501 "ack_timeout": 0, 00:28:04.501 "data_wr_pool_size": 0 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "nvmf_create_subsystem", 00:28:04.501 "params": { 00:28:04.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.501 "allow_any_host": false, 00:28:04.501 "serial_number": "SPDK00000000000001", 00:28:04.501 "model_number": "SPDK bdev Controller", 00:28:04.501 "max_namespaces": 10, 00:28:04.501 "min_cntlid": 1, 00:28:04.501 "max_cntlid": 65519, 00:28:04.501 "ana_reporting": false 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "nvmf_subsystem_add_host", 00:28:04.501 "params": { 00:28:04.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.501 "host": "nqn.2016-06.io.spdk:host1", 00:28:04.501 "psk": "/tmp/tmp.s7tlBSc9pl" 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "nvmf_subsystem_add_ns", 00:28:04.501 "params": { 00:28:04.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.501 "namespace": { 00:28:04.501 "nsid": 1, 00:28:04.501 "bdev_name": "malloc0", 00:28:04.501 "nguid": "CC7865A8629442CCAD4F47610CAD41ED", 00:28:04.501 "uuid": "cc7865a8-6294-42cc-ad4f-47610cad41ed", 00:28:04.501 "no_auto_visible": false 00:28:04.501 } 00:28:04.501 } 00:28:04.501 }, 00:28:04.501 { 00:28:04.501 "method": "nvmf_subsystem_add_listener", 00:28:04.501 "params": { 00:28:04.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.501 
"listen_address": { 00:28:04.501 "trtype": "TCP", 00:28:04.501 "adrfam": "IPv4", 00:28:04.501 "traddr": "10.0.0.2", 00:28:04.501 "trsvcid": "4420" 00:28:04.501 }, 00:28:04.501 "secure_channel": true 00:28:04.501 } 00:28:04.501 } 00:28:04.501 ] 00:28:04.501 } 00:28:04.501 ] 00:28:04.501 }' 00:28:04.501 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:04.501 13:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1478259 00:28:04.501 13:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1478259 00:28:04.501 13:56:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:28:04.501 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1478259 ']' 00:28:04.501 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.502 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:04.502 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.502 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:04.502 13:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:04.502 [2024-06-10 13:56:18.852227] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:28:04.502 [2024-06-10 13:56:18.852292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.502 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.502 [2024-06-10 13:56:18.967848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.801 [2024-06-10 13:56:19.053795] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.802 [2024-06-10 13:56:19.053838] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.802 [2024-06-10 13:56:19.053851] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.802 [2024-06-10 13:56:19.053863] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.802 [2024-06-10 13:56:19.053873] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:04.802 [2024-06-10 13:56:19.053947] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.802 [2024-06-10 13:56:19.262368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.084 [2024-06-10 13:56:19.278320] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:05.084 [2024-06-10 13:56:19.294374] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:05.084 [2024-06-10 13:56:19.306910] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1478322 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1478322 /var/tmp/bdevperf.sock 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1478322 ']' 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:05.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
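The bdevperf initiator is launched the same way: -z makes it wait for an RPC trigger instead of starting immediately, -r names the RPC socket, and the JSON echoed below (which attaches a TLS controller using the same PSK file, /tmp/tmp.s7tlBSc9pl) is passed in through /dev/fd/63. A rough sketch of that sequence under those assumptions, with a hypothetical $bperf_conf variable for the config:

    # sketch only: $bperf_conf is assumed to hold the bdevperf JSON config echoed below
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bperf_conf") &
    # once the socket is listening, the verify workload is kicked off over RPC, as seen later in this log
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests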
00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:05.343 13:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:28:05.343 "subsystems": [ 00:28:05.343 { 00:28:05.343 "subsystem": "keyring", 00:28:05.343 "config": [] 00:28:05.343 }, 00:28:05.343 { 00:28:05.343 "subsystem": "iobuf", 00:28:05.343 "config": [ 00:28:05.343 { 00:28:05.343 "method": "iobuf_set_options", 00:28:05.343 "params": { 00:28:05.343 "small_pool_count": 8192, 00:28:05.343 "large_pool_count": 1024, 00:28:05.343 "small_bufsize": 8192, 00:28:05.343 "large_bufsize": 135168 00:28:05.343 } 00:28:05.343 } 00:28:05.343 ] 00:28:05.343 }, 00:28:05.343 { 00:28:05.343 "subsystem": "sock", 00:28:05.343 "config": [ 00:28:05.343 { 00:28:05.343 "method": "sock_set_default_impl", 00:28:05.343 "params": { 00:28:05.343 "impl_name": "posix" 00:28:05.343 } 00:28:05.343 }, 00:28:05.343 { 00:28:05.343 "method": "sock_impl_set_options", 00:28:05.343 "params": { 00:28:05.343 "impl_name": "ssl", 00:28:05.343 "recv_buf_size": 4096, 00:28:05.343 "send_buf_size": 4096, 00:28:05.343 "enable_recv_pipe": true, 00:28:05.343 "enable_quickack": false, 00:28:05.343 "enable_placement_id": 0, 00:28:05.343 "enable_zerocopy_send_server": true, 00:28:05.343 "enable_zerocopy_send_client": false, 00:28:05.343 "zerocopy_threshold": 0, 00:28:05.343 "tls_version": 0, 00:28:05.343 "enable_ktls": false, 00:28:05.343 "enable_new_session_tickets": true 00:28:05.343 } 00:28:05.343 }, 00:28:05.343 { 00:28:05.343 "method": "sock_impl_set_options", 00:28:05.343 "params": { 00:28:05.343 "impl_name": "posix", 00:28:05.343 "recv_buf_size": 2097152, 00:28:05.343 "send_buf_size": 2097152, 00:28:05.343 "enable_recv_pipe": true, 00:28:05.343 "enable_quickack": false, 00:28:05.343 "enable_placement_id": 0, 00:28:05.343 "enable_zerocopy_send_server": true, 00:28:05.343 "enable_zerocopy_send_client": false, 00:28:05.343 "zerocopy_threshold": 0, 00:28:05.343 "tls_version": 0, 00:28:05.343 "enable_ktls": false, 00:28:05.343 "enable_new_session_tickets": false 00:28:05.343 } 00:28:05.343 } 00:28:05.343 ] 00:28:05.343 }, 00:28:05.343 { 00:28:05.343 "subsystem": "vmd", 00:28:05.343 "config": [] 00:28:05.343 }, 00:28:05.343 { 00:28:05.343 "subsystem": "accel", 00:28:05.343 "config": [ 00:28:05.343 { 00:28:05.343 "method": "accel_set_options", 00:28:05.343 "params": { 00:28:05.343 "small_cache_size": 128, 00:28:05.344 "large_cache_size": 16, 00:28:05.344 "task_count": 2048, 00:28:05.344 "sequence_count": 2048, 00:28:05.344 "buf_count": 2048 00:28:05.344 } 00:28:05.344 } 00:28:05.344 ] 00:28:05.344 }, 00:28:05.344 { 00:28:05.344 "subsystem": "bdev", 00:28:05.344 "config": [ 00:28:05.344 { 00:28:05.344 "method": "bdev_set_options", 00:28:05.344 "params": { 00:28:05.344 "bdev_io_pool_size": 65535, 00:28:05.344 "bdev_io_cache_size": 256, 00:28:05.344 "bdev_auto_examine": true, 00:28:05.344 "iobuf_small_cache_size": 128, 00:28:05.344 "iobuf_large_cache_size": 16 00:28:05.344 } 00:28:05.344 }, 00:28:05.344 { 00:28:05.344 "method": "bdev_raid_set_options", 00:28:05.344 "params": { 00:28:05.344 "process_window_size_kb": 1024 00:28:05.344 } 00:28:05.344 }, 00:28:05.344 { 00:28:05.344 "method": "bdev_iscsi_set_options", 00:28:05.344 "params": { 00:28:05.344 "timeout_sec": 30 00:28:05.344 } 00:28:05.344 }, 00:28:05.344 { 00:28:05.344 "method": "bdev_nvme_set_options", 00:28:05.344 "params": { 00:28:05.344 "action_on_timeout": "none", 00:28:05.344 "timeout_us": 0, 00:28:05.344 "timeout_admin_us": 0, 00:28:05.344 "keep_alive_timeout_ms": 10000, 
00:28:05.344 "arbitration_burst": 0, 00:28:05.344 "low_priority_weight": 0, 00:28:05.344 "medium_priority_weight": 0, 00:28:05.344 "high_priority_weight": 0, 00:28:05.344 "nvme_adminq_poll_period_us": 10000, 00:28:05.344 "nvme_ioq_poll_period_us": 0, 00:28:05.344 "io_queue_requests": 512, 00:28:05.344 "delay_cmd_submit": true, 00:28:05.344 "transport_retry_count": 4, 00:28:05.344 "bdev_retry_count": 3, 00:28:05.344 "transport_ack_timeout": 0, 00:28:05.344 "ctrlr_loss_timeout_sec": 0, 00:28:05.344 "reconnect_delay_sec": 0, 00:28:05.344 "fast_io_fail_timeout_sec": 0, 00:28:05.344 "disable_auto_failback": false, 00:28:05.344 "generate_uuids": false, 00:28:05.344 "transport_tos": 0, 00:28:05.344 "nvme_error_stat": false, 00:28:05.344 "rdma_srq_size": 0, 00:28:05.344 "io_path_stat": false, 00:28:05.344 "allow_accel_sequence": false, 00:28:05.344 "rdma_max_cq_size": 0, 00:28:05.344 "rdma_cm_event_timeout_ms": 0, 00:28:05.344 "dhchap_digests": [ 00:28:05.344 "sha256", 00:28:05.344 "sha384", 00:28:05.344 "sha512" 00:28:05.344 ], 00:28:05.344 "dhchap_dhgroups": [ 00:28:05.344 "null", 00:28:05.344 "ffdhe2048", 00:28:05.344 "ffdhe3072", 00:28:05.344 "ffdhe4096", 00:28:05.344 "ffdhe6144", 00:28:05.344 "ffdhe8192" 00:28:05.344 ] 00:28:05.344 } 00:28:05.344 }, 00:28:05.344 { 00:28:05.344 "method": "bdev_nvme_attach_controller", 00:28:05.344 "params": { 00:28:05.344 "name": "TLSTEST", 00:28:05.344 "trtype": "TCP", 00:28:05.344 "adrfam": "IPv4", 00:28:05.344 "traddr": "10.0.0.2", 00:28:05.344 "trsvcid": "4420", 00:28:05.344 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.344 "prchk_reftag": false, 00:28:05.344 "prchk_guard": false, 00:28:05.344 "ctrlr_loss_timeout_sec": 0, 00:28:05.344 "reconnect_delay_sec": 0, 00:28:05.344 "fast_io_fail_timeout_sec": 0, 00:28:05.344 "psk": "/tmp/tmp.s7tlBSc9pl", 00:28:05.344 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:05.344 "hdgst": false, 00:28:05.344 "ddgst": false 00:28:05.344 } 00:28:05.344 }, 00:28:05.344 { 00:28:05.344 "method": "bdev_nvme_set_hotplug", 00:28:05.344 "params": { 00:28:05.344 "period_us": 100000, 00:28:05.344 "enable": false 00:28:05.344 } 00:28:05.344 }, 00:28:05.344 { 00:28:05.344 "method": "bdev_wait_for_examine" 00:28:05.344 } 00:28:05.344 ] 00:28:05.344 }, 00:28:05.344 { 00:28:05.344 "subsystem": "nbd", 00:28:05.344 "config": [] 00:28:05.344 } 00:28:05.344 ] 00:28:05.344 }' 00:28:05.344 13:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:05.603 [2024-06-10 13:56:19.847192] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:28:05.603 [2024-06-10 13:56:19.847258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478322 ] 00:28:05.603 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.603 [2024-06-10 13:56:19.942999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.603 [2024-06-10 13:56:20.017353] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.861 [2024-06-10 13:56:20.160661] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:05.861 [2024-06-10 13:56:20.160762] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:06.428 13:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:06.429 13:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:28:06.429 13:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:28:06.429 Running I/O for 10 seconds... 00:28:18.639 00:28:18.639 Latency(us) 00:28:18.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.639 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:18.639 Verification LBA range: start 0x0 length 0x2000 00:28:18.639 TLSTESTn1 : 10.02 4876.06 19.05 0.00 0.00 26202.62 6763.32 66689.43 00:28:18.639 =================================================================================================================== 00:28:18.639 Total : 4876.06 19.05 0.00 0.00 26202.62 6763.32 66689.43 00:28:18.639 0 00:28:18.639 13:56:30 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:18.639 13:56:30 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1478322 00:28:18.639 13:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1478322 ']' 00:28:18.639 13:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1478322 00:28:18.639 13:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:28:18.639 13:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:18.639 13:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1478322 00:28:18.639 13:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:28:18.639 13:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:28:18.639 13:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1478322' 00:28:18.639 killing process with pid 1478322 00:28:18.639 13:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1478322 00:28:18.639 Received shutdown signal, test time was about 10.000000 seconds 00:28:18.639 00:28:18.639 Latency(us) 00:28:18.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.639 =================================================================================================================== 00:28:18.639 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:18.639 [2024-06-10 13:56:30.996363] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:28:18.639 13:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1478322 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1478259 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1478259 ']' 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1478259 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1478259 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1478259' 00:28:18.639 killing process with pid 1478259 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1478259 00:28:18.639 [2024-06-10 13:56:31.238790] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1478259 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1480255 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1480255 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1480255 ']' 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:18.639 13:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:18.639 [2024-06-10 13:56:31.509289] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:28:18.639 [2024-06-10 13:56:31.509352] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.639 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.639 [2024-06-10 13:56:31.638029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.639 [2024-06-10 13:56:31.721640] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:18.639 [2024-06-10 13:56:31.721686] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.639 [2024-06-10 13:56:31.721700] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.639 [2024-06-10 13:56:31.721712] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.639 [2024-06-10 13:56:31.721722] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.640 [2024-06-10 13:56:31.721749] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.640 13:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:18.640 13:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:28:18.640 13:56:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:18.640 13:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:18.640 13:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:18.640 13:56:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.640 13:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.s7tlBSc9pl 00:28:18.640 13:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.s7tlBSc9pl 00:28:18.640 13:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:18.640 [2024-06-10 13:56:32.669709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.640 13:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:28:18.640 13:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:28:18.898 [2024-06-10 13:56:33.126895] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:18.898 [2024-06-10 13:56:33.127132] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.898 13:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:28:19.156 malloc0 00:28:19.156 13:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:19.156 13:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s7tlBSc9pl 00:28:19.415 [2024-06-10 13:56:33.814017] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:19.415 13:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1480741 00:28:19.415 13:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:28:19.415 13:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:28:19.415 13:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1480741 /var/tmp/bdevperf.sock 00:28:19.415 13:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1480741 ']' 00:28:19.415 13:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:19.415 13:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:19.415 13:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:19.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:19.415 13:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:19.415 13:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:19.415 [2024-06-10 13:56:33.886100] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:28:19.415 [2024-06-10 13:56:33.886167] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480741 ] 00:28:19.674 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.674 [2024-06-10 13:56:33.996743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.674 [2024-06-10 13:56:34.079315] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.608 13:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:20.608 13:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:28:20.608 13:56:34 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.s7tlBSc9pl 00:28:20.609 13:56:35 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:28:20.867 [2024-06-10 13:56:35.224632] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:20.867 nvme0n1 00:28:20.867 13:56:35 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:21.125 Running I/O for 1 seconds... 
00:28:22.061 00:28:22.061 Latency(us) 00:28:22.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.061 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:22.061 Verification LBA range: start 0x0 length 0x2000 00:28:22.061 nvme0n1 : 1.02 3953.11 15.44 0.00 0.00 32039.15 6474.96 70044.88 00:28:22.061 =================================================================================================================== 00:28:22.061 Total : 3953.11 15.44 0.00 0.00 32039.15 6474.96 70044.88 00:28:22.061 0 00:28:22.061 13:56:36 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1480741 00:28:22.061 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1480741 ']' 00:28:22.061 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1480741 00:28:22.061 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:28:22.061 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:22.061 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1480741 00:28:22.061 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:22.061 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:22.061 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1480741' 00:28:22.061 killing process with pid 1480741 00:28:22.061 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1480741 00:28:22.061 Received shutdown signal, test time was about 1.000000 seconds 00:28:22.061 00:28:22.061 Latency(us) 00:28:22.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.061 =================================================================================================================== 00:28:22.061 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:22.061 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1480741 00:28:22.320 13:56:36 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1480255 00:28:22.320 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1480255 ']' 00:28:22.320 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1480255 00:28:22.320 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:28:22.320 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:22.320 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1480255 00:28:22.320 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:22.320 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:22.320 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1480255' 00:28:22.320 killing process with pid 1480255 00:28:22.320 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1480255 00:28:22.320 [2024-06-10 13:56:36.775942] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:22.320 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1480255 00:28:22.580 13:56:36 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:28:22.580 13:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.580 
13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:22.580 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:22.580 13:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1481286 00:28:22.580 13:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:22.580 13:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1481286 00:28:22.580 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1481286 ']' 00:28:22.580 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.580 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:22.580 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.580 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:22.580 13:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:22.580 [2024-06-10 13:56:37.048697] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:28:22.580 [2024-06-10 13:56:37.048760] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.839 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.839 [2024-06-10 13:56:37.177564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.839 [2024-06-10 13:56:37.255564] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.839 [2024-06-10 13:56:37.255617] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.839 [2024-06-10 13:56:37.255631] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.839 [2024-06-10 13:56:37.255643] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.839 [2024-06-10 13:56:37.255654] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:22.839 [2024-06-10 13:56:37.255685] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.777 13:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:23.777 13:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:28:23.777 13:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:23.777 13:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:23.777 13:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:23.777 [2024-06-10 13:56:38.008345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.777 malloc0 00:28:23.777 [2024-06-10 13:56:38.037560] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:23.777 [2024-06-10 13:56:38.037803] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1481543 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1481543 /var/tmp/bdevperf.sock 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1481543 ']' 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:23.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:23.777 13:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:23.777 [2024-06-10 13:56:38.115426] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:28:23.777 [2024-06-10 13:56:38.115485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481543 ] 00:28:23.777 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.777 [2024-06-10 13:56:38.224895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.037 [2024-06-10 13:56:38.312278] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.606 13:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:24.606 13:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:28:24.606 13:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.s7tlBSc9pl 00:28:24.865 13:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:28:25.123 [2024-06-10 13:56:39.484266] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:25.123 nvme0n1 00:28:25.123 13:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:25.381 Running I/O for 1 seconds... 00:28:26.316 00:28:26.316 Latency(us) 00:28:26.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.316 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:26.316 Verification LBA range: start 0x0 length 0x2000 00:28:26.316 nvme0n1 : 1.05 3490.07 13.63 0.00 0.00 35900.72 6553.60 74239.18 00:28:26.316 =================================================================================================================== 00:28:26.316 Total : 3490.07 13.63 0.00 0.00 35900.72 6553.60 74239.18 00:28:26.316 0 00:28:26.316 13:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:28:26.316 13:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.316 13:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:26.575 13:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.575 13:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:28:26.575 "subsystems": [ 00:28:26.575 { 00:28:26.575 "subsystem": "keyring", 00:28:26.575 "config": [ 00:28:26.575 { 00:28:26.575 "method": "keyring_file_add_key", 00:28:26.575 "params": { 00:28:26.575 "name": "key0", 00:28:26.575 "path": "/tmp/tmp.s7tlBSc9pl" 00:28:26.575 } 00:28:26.575 } 00:28:26.575 ] 00:28:26.575 }, 00:28:26.575 { 00:28:26.575 "subsystem": "iobuf", 00:28:26.575 "config": [ 00:28:26.575 { 00:28:26.575 "method": "iobuf_set_options", 00:28:26.575 "params": { 00:28:26.575 "small_pool_count": 8192, 00:28:26.575 "large_pool_count": 1024, 00:28:26.575 "small_bufsize": 8192, 00:28:26.575 "large_bufsize": 135168 00:28:26.575 } 00:28:26.575 } 00:28:26.575 ] 00:28:26.575 }, 00:28:26.575 { 00:28:26.575 "subsystem": "sock", 00:28:26.575 "config": [ 00:28:26.575 { 00:28:26.575 "method": "sock_set_default_impl", 00:28:26.575 "params": { 00:28:26.575 "impl_name": "posix" 00:28:26.575 } 00:28:26.575 }, 00:28:26.575 
{ 00:28:26.575 "method": "sock_impl_set_options", 00:28:26.575 "params": { 00:28:26.575 "impl_name": "ssl", 00:28:26.575 "recv_buf_size": 4096, 00:28:26.575 "send_buf_size": 4096, 00:28:26.575 "enable_recv_pipe": true, 00:28:26.575 "enable_quickack": false, 00:28:26.575 "enable_placement_id": 0, 00:28:26.575 "enable_zerocopy_send_server": true, 00:28:26.575 "enable_zerocopy_send_client": false, 00:28:26.575 "zerocopy_threshold": 0, 00:28:26.575 "tls_version": 0, 00:28:26.575 "enable_ktls": false, 00:28:26.575 "enable_new_session_tickets": true 00:28:26.575 } 00:28:26.575 }, 00:28:26.575 { 00:28:26.575 "method": "sock_impl_set_options", 00:28:26.575 "params": { 00:28:26.575 "impl_name": "posix", 00:28:26.575 "recv_buf_size": 2097152, 00:28:26.575 "send_buf_size": 2097152, 00:28:26.576 "enable_recv_pipe": true, 00:28:26.576 "enable_quickack": false, 00:28:26.576 "enable_placement_id": 0, 00:28:26.576 "enable_zerocopy_send_server": true, 00:28:26.576 "enable_zerocopy_send_client": false, 00:28:26.576 "zerocopy_threshold": 0, 00:28:26.576 "tls_version": 0, 00:28:26.576 "enable_ktls": false, 00:28:26.576 "enable_new_session_tickets": false 00:28:26.576 } 00:28:26.576 } 00:28:26.576 ] 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "subsystem": "vmd", 00:28:26.576 "config": [] 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "subsystem": "accel", 00:28:26.576 "config": [ 00:28:26.576 { 00:28:26.576 "method": "accel_set_options", 00:28:26.576 "params": { 00:28:26.576 "small_cache_size": 128, 00:28:26.576 "large_cache_size": 16, 00:28:26.576 "task_count": 2048, 00:28:26.576 "sequence_count": 2048, 00:28:26.576 "buf_count": 2048 00:28:26.576 } 00:28:26.576 } 00:28:26.576 ] 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "subsystem": "bdev", 00:28:26.576 "config": [ 00:28:26.576 { 00:28:26.576 "method": "bdev_set_options", 00:28:26.576 "params": { 00:28:26.576 "bdev_io_pool_size": 65535, 00:28:26.576 "bdev_io_cache_size": 256, 00:28:26.576 "bdev_auto_examine": true, 00:28:26.576 "iobuf_small_cache_size": 128, 00:28:26.576 "iobuf_large_cache_size": 16 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "bdev_raid_set_options", 00:28:26.576 "params": { 00:28:26.576 "process_window_size_kb": 1024 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "bdev_iscsi_set_options", 00:28:26.576 "params": { 00:28:26.576 "timeout_sec": 30 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "bdev_nvme_set_options", 00:28:26.576 "params": { 00:28:26.576 "action_on_timeout": "none", 00:28:26.576 "timeout_us": 0, 00:28:26.576 "timeout_admin_us": 0, 00:28:26.576 "keep_alive_timeout_ms": 10000, 00:28:26.576 "arbitration_burst": 0, 00:28:26.576 "low_priority_weight": 0, 00:28:26.576 "medium_priority_weight": 0, 00:28:26.576 "high_priority_weight": 0, 00:28:26.576 "nvme_adminq_poll_period_us": 10000, 00:28:26.576 "nvme_ioq_poll_period_us": 0, 00:28:26.576 "io_queue_requests": 0, 00:28:26.576 "delay_cmd_submit": true, 00:28:26.576 "transport_retry_count": 4, 00:28:26.576 "bdev_retry_count": 3, 00:28:26.576 "transport_ack_timeout": 0, 00:28:26.576 "ctrlr_loss_timeout_sec": 0, 00:28:26.576 "reconnect_delay_sec": 0, 00:28:26.576 "fast_io_fail_timeout_sec": 0, 00:28:26.576 "disable_auto_failback": false, 00:28:26.576 "generate_uuids": false, 00:28:26.576 "transport_tos": 0, 00:28:26.576 "nvme_error_stat": false, 00:28:26.576 "rdma_srq_size": 0, 00:28:26.576 "io_path_stat": false, 00:28:26.576 "allow_accel_sequence": false, 00:28:26.576 "rdma_max_cq_size": 0, 00:28:26.576 
"rdma_cm_event_timeout_ms": 0, 00:28:26.576 "dhchap_digests": [ 00:28:26.576 "sha256", 00:28:26.576 "sha384", 00:28:26.576 "sha512" 00:28:26.576 ], 00:28:26.576 "dhchap_dhgroups": [ 00:28:26.576 "null", 00:28:26.576 "ffdhe2048", 00:28:26.576 "ffdhe3072", 00:28:26.576 "ffdhe4096", 00:28:26.576 "ffdhe6144", 00:28:26.576 "ffdhe8192" 00:28:26.576 ] 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "bdev_nvme_set_hotplug", 00:28:26.576 "params": { 00:28:26.576 "period_us": 100000, 00:28:26.576 "enable": false 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "bdev_malloc_create", 00:28:26.576 "params": { 00:28:26.576 "name": "malloc0", 00:28:26.576 "num_blocks": 8192, 00:28:26.576 "block_size": 4096, 00:28:26.576 "physical_block_size": 4096, 00:28:26.576 "uuid": "8cd5a1b5-a01e-4d17-bbfa-dca90d024189", 00:28:26.576 "optimal_io_boundary": 0 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "bdev_wait_for_examine" 00:28:26.576 } 00:28:26.576 ] 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "subsystem": "nbd", 00:28:26.576 "config": [] 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "subsystem": "scheduler", 00:28:26.576 "config": [ 00:28:26.576 { 00:28:26.576 "method": "framework_set_scheduler", 00:28:26.576 "params": { 00:28:26.576 "name": "static" 00:28:26.576 } 00:28:26.576 } 00:28:26.576 ] 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "subsystem": "nvmf", 00:28:26.576 "config": [ 00:28:26.576 { 00:28:26.576 "method": "nvmf_set_config", 00:28:26.576 "params": { 00:28:26.576 "discovery_filter": "match_any", 00:28:26.576 "admin_cmd_passthru": { 00:28:26.576 "identify_ctrlr": false 00:28:26.576 } 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "nvmf_set_max_subsystems", 00:28:26.576 "params": { 00:28:26.576 "max_subsystems": 1024 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "nvmf_set_crdt", 00:28:26.576 "params": { 00:28:26.576 "crdt1": 0, 00:28:26.576 "crdt2": 0, 00:28:26.576 "crdt3": 0 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "nvmf_create_transport", 00:28:26.576 "params": { 00:28:26.576 "trtype": "TCP", 00:28:26.576 "max_queue_depth": 128, 00:28:26.576 "max_io_qpairs_per_ctrlr": 127, 00:28:26.576 "in_capsule_data_size": 4096, 00:28:26.576 "max_io_size": 131072, 00:28:26.576 "io_unit_size": 131072, 00:28:26.576 "max_aq_depth": 128, 00:28:26.576 "num_shared_buffers": 511, 00:28:26.576 "buf_cache_size": 4294967295, 00:28:26.576 "dif_insert_or_strip": false, 00:28:26.576 "zcopy": false, 00:28:26.576 "c2h_success": false, 00:28:26.576 "sock_priority": 0, 00:28:26.576 "abort_timeout_sec": 1, 00:28:26.576 "ack_timeout": 0, 00:28:26.576 "data_wr_pool_size": 0 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "nvmf_create_subsystem", 00:28:26.576 "params": { 00:28:26.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.576 "allow_any_host": false, 00:28:26.576 "serial_number": "00000000000000000000", 00:28:26.576 "model_number": "SPDK bdev Controller", 00:28:26.576 "max_namespaces": 32, 00:28:26.576 "min_cntlid": 1, 00:28:26.576 "max_cntlid": 65519, 00:28:26.576 "ana_reporting": false 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "nvmf_subsystem_add_host", 00:28:26.576 "params": { 00:28:26.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.576 "host": "nqn.2016-06.io.spdk:host1", 00:28:26.576 "psk": "key0" 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "nvmf_subsystem_add_ns", 00:28:26.576 "params": { 00:28:26.576 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:28:26.576 "namespace": { 00:28:26.576 "nsid": 1, 00:28:26.576 "bdev_name": "malloc0", 00:28:26.576 "nguid": "8CD5A1B5A01E4D17BBFADCA90D024189", 00:28:26.576 "uuid": "8cd5a1b5-a01e-4d17-bbfa-dca90d024189", 00:28:26.576 "no_auto_visible": false 00:28:26.576 } 00:28:26.576 } 00:28:26.576 }, 00:28:26.576 { 00:28:26.576 "method": "nvmf_subsystem_add_listener", 00:28:26.576 "params": { 00:28:26.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.576 "listen_address": { 00:28:26.576 "trtype": "TCP", 00:28:26.576 "adrfam": "IPv4", 00:28:26.576 "traddr": "10.0.0.2", 00:28:26.576 "trsvcid": "4420" 00:28:26.576 }, 00:28:26.576 "secure_channel": true 00:28:26.576 } 00:28:26.576 } 00:28:26.576 ] 00:28:26.576 } 00:28:26.576 ] 00:28:26.576 }' 00:28:26.576 13:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:28:26.836 13:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:28:26.836 "subsystems": [ 00:28:26.836 { 00:28:26.836 "subsystem": "keyring", 00:28:26.836 "config": [ 00:28:26.836 { 00:28:26.836 "method": "keyring_file_add_key", 00:28:26.836 "params": { 00:28:26.836 "name": "key0", 00:28:26.836 "path": "/tmp/tmp.s7tlBSc9pl" 00:28:26.836 } 00:28:26.836 } 00:28:26.836 ] 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "subsystem": "iobuf", 00:28:26.836 "config": [ 00:28:26.836 { 00:28:26.836 "method": "iobuf_set_options", 00:28:26.836 "params": { 00:28:26.836 "small_pool_count": 8192, 00:28:26.836 "large_pool_count": 1024, 00:28:26.836 "small_bufsize": 8192, 00:28:26.836 "large_bufsize": 135168 00:28:26.836 } 00:28:26.836 } 00:28:26.836 ] 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "subsystem": "sock", 00:28:26.836 "config": [ 00:28:26.836 { 00:28:26.836 "method": "sock_set_default_impl", 00:28:26.836 "params": { 00:28:26.836 "impl_name": "posix" 00:28:26.836 } 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "method": "sock_impl_set_options", 00:28:26.836 "params": { 00:28:26.836 "impl_name": "ssl", 00:28:26.836 "recv_buf_size": 4096, 00:28:26.836 "send_buf_size": 4096, 00:28:26.836 "enable_recv_pipe": true, 00:28:26.836 "enable_quickack": false, 00:28:26.836 "enable_placement_id": 0, 00:28:26.836 "enable_zerocopy_send_server": true, 00:28:26.836 "enable_zerocopy_send_client": false, 00:28:26.836 "zerocopy_threshold": 0, 00:28:26.836 "tls_version": 0, 00:28:26.836 "enable_ktls": false, 00:28:26.836 "enable_new_session_tickets": true 00:28:26.836 } 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "method": "sock_impl_set_options", 00:28:26.836 "params": { 00:28:26.836 "impl_name": "posix", 00:28:26.836 "recv_buf_size": 2097152, 00:28:26.836 "send_buf_size": 2097152, 00:28:26.836 "enable_recv_pipe": true, 00:28:26.836 "enable_quickack": false, 00:28:26.836 "enable_placement_id": 0, 00:28:26.836 "enable_zerocopy_send_server": true, 00:28:26.836 "enable_zerocopy_send_client": false, 00:28:26.836 "zerocopy_threshold": 0, 00:28:26.836 "tls_version": 0, 00:28:26.836 "enable_ktls": false, 00:28:26.836 "enable_new_session_tickets": false 00:28:26.836 } 00:28:26.836 } 00:28:26.836 ] 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "subsystem": "vmd", 00:28:26.836 "config": [] 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "subsystem": "accel", 00:28:26.836 "config": [ 00:28:26.836 { 00:28:26.836 "method": "accel_set_options", 00:28:26.836 "params": { 00:28:26.836 "small_cache_size": 128, 00:28:26.836 "large_cache_size": 16, 00:28:26.836 "task_count": 2048, 00:28:26.836 "sequence_count": 
2048, 00:28:26.836 "buf_count": 2048 00:28:26.836 } 00:28:26.836 } 00:28:26.836 ] 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "subsystem": "bdev", 00:28:26.836 "config": [ 00:28:26.836 { 00:28:26.836 "method": "bdev_set_options", 00:28:26.836 "params": { 00:28:26.836 "bdev_io_pool_size": 65535, 00:28:26.836 "bdev_io_cache_size": 256, 00:28:26.836 "bdev_auto_examine": true, 00:28:26.836 "iobuf_small_cache_size": 128, 00:28:26.836 "iobuf_large_cache_size": 16 00:28:26.836 } 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "method": "bdev_raid_set_options", 00:28:26.836 "params": { 00:28:26.836 "process_window_size_kb": 1024 00:28:26.836 } 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "method": "bdev_iscsi_set_options", 00:28:26.836 "params": { 00:28:26.836 "timeout_sec": 30 00:28:26.836 } 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "method": "bdev_nvme_set_options", 00:28:26.836 "params": { 00:28:26.836 "action_on_timeout": "none", 00:28:26.836 "timeout_us": 0, 00:28:26.836 "timeout_admin_us": 0, 00:28:26.836 "keep_alive_timeout_ms": 10000, 00:28:26.836 "arbitration_burst": 0, 00:28:26.836 "low_priority_weight": 0, 00:28:26.836 "medium_priority_weight": 0, 00:28:26.836 "high_priority_weight": 0, 00:28:26.836 "nvme_adminq_poll_period_us": 10000, 00:28:26.836 "nvme_ioq_poll_period_us": 0, 00:28:26.836 "io_queue_requests": 512, 00:28:26.836 "delay_cmd_submit": true, 00:28:26.836 "transport_retry_count": 4, 00:28:26.836 "bdev_retry_count": 3, 00:28:26.836 "transport_ack_timeout": 0, 00:28:26.836 "ctrlr_loss_timeout_sec": 0, 00:28:26.836 "reconnect_delay_sec": 0, 00:28:26.836 "fast_io_fail_timeout_sec": 0, 00:28:26.836 "disable_auto_failback": false, 00:28:26.836 "generate_uuids": false, 00:28:26.836 "transport_tos": 0, 00:28:26.836 "nvme_error_stat": false, 00:28:26.836 "rdma_srq_size": 0, 00:28:26.836 "io_path_stat": false, 00:28:26.836 "allow_accel_sequence": false, 00:28:26.836 "rdma_max_cq_size": 0, 00:28:26.836 "rdma_cm_event_timeout_ms": 0, 00:28:26.836 "dhchap_digests": [ 00:28:26.836 "sha256", 00:28:26.836 "sha384", 00:28:26.836 "sha512" 00:28:26.836 ], 00:28:26.836 "dhchap_dhgroups": [ 00:28:26.836 "null", 00:28:26.836 "ffdhe2048", 00:28:26.836 "ffdhe3072", 00:28:26.836 "ffdhe4096", 00:28:26.836 "ffdhe6144", 00:28:26.836 "ffdhe8192" 00:28:26.836 ] 00:28:26.836 } 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "method": "bdev_nvme_attach_controller", 00:28:26.836 "params": { 00:28:26.836 "name": "nvme0", 00:28:26.836 "trtype": "TCP", 00:28:26.836 "adrfam": "IPv4", 00:28:26.836 "traddr": "10.0.0.2", 00:28:26.836 "trsvcid": "4420", 00:28:26.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.836 "prchk_reftag": false, 00:28:26.836 "prchk_guard": false, 00:28:26.836 "ctrlr_loss_timeout_sec": 0, 00:28:26.836 "reconnect_delay_sec": 0, 00:28:26.836 "fast_io_fail_timeout_sec": 0, 00:28:26.836 "psk": "key0", 00:28:26.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:26.836 "hdgst": false, 00:28:26.836 "ddgst": false 00:28:26.836 } 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "method": "bdev_nvme_set_hotplug", 00:28:26.836 "params": { 00:28:26.836 "period_us": 100000, 00:28:26.836 "enable": false 00:28:26.836 } 00:28:26.836 }, 00:28:26.836 { 00:28:26.836 "method": "bdev_enable_histogram", 00:28:26.836 "params": { 00:28:26.836 "name": "nvme0n1", 00:28:26.836 "enable": true 00:28:26.836 } 00:28:26.837 }, 00:28:26.837 { 00:28:26.837 "method": "bdev_wait_for_examine" 00:28:26.837 } 00:28:26.837 ] 00:28:26.837 }, 00:28:26.837 { 00:28:26.837 "subsystem": "nbd", 00:28:26.837 "config": [] 00:28:26.837 } 
00:28:26.837 ] 00:28:26.837 }' 00:28:26.837 13:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1481543 00:28:26.837 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1481543 ']' 00:28:26.837 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1481543 00:28:26.837 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:28:26.837 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:26.837 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1481543 00:28:26.837 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:26.837 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:26.837 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1481543' 00:28:26.837 killing process with pid 1481543 00:28:26.837 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1481543 00:28:26.837 Received shutdown signal, test time was about 1.000000 seconds 00:28:26.837 00:28:26.837 Latency(us) 00:28:26.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.837 =================================================================================================================== 00:28:26.837 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.837 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1481543 00:28:27.096 13:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1481286 00:28:27.096 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1481286 ']' 00:28:27.096 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1481286 00:28:27.096 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:28:27.096 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:27.096 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1481286 00:28:27.096 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:27.096 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:27.096 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1481286' 00:28:27.096 killing process with pid 1481286 00:28:27.096 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1481286 00:28:27.096 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1481286 00:28:27.355 13:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:28:27.355 13:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:27.355 13:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:28:27.355 "subsystems": [ 00:28:27.355 { 00:28:27.355 "subsystem": "keyring", 00:28:27.355 "config": [ 00:28:27.355 { 00:28:27.355 "method": "keyring_file_add_key", 00:28:27.355 "params": { 00:28:27.355 "name": "key0", 00:28:27.355 "path": "/tmp/tmp.s7tlBSc9pl" 00:28:27.355 } 00:28:27.355 } 00:28:27.355 ] 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "subsystem": "iobuf", 00:28:27.355 "config": [ 00:28:27.355 { 00:28:27.355 "method": "iobuf_set_options", 00:28:27.355 "params": { 00:28:27.355 "small_pool_count": 8192, 00:28:27.355 "large_pool_count": 1024, 00:28:27.355 "small_bufsize": 8192, 00:28:27.355 "large_bufsize": 
135168 00:28:27.355 } 00:28:27.355 } 00:28:27.355 ] 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "subsystem": "sock", 00:28:27.355 "config": [ 00:28:27.355 { 00:28:27.355 "method": "sock_set_default_impl", 00:28:27.355 "params": { 00:28:27.355 "impl_name": "posix" 00:28:27.355 } 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "method": "sock_impl_set_options", 00:28:27.355 "params": { 00:28:27.355 "impl_name": "ssl", 00:28:27.355 "recv_buf_size": 4096, 00:28:27.355 "send_buf_size": 4096, 00:28:27.355 "enable_recv_pipe": true, 00:28:27.355 "enable_quickack": false, 00:28:27.355 "enable_placement_id": 0, 00:28:27.355 "enable_zerocopy_send_server": true, 00:28:27.355 "enable_zerocopy_send_client": false, 00:28:27.355 "zerocopy_threshold": 0, 00:28:27.355 "tls_version": 0, 00:28:27.355 "enable_ktls": false, 00:28:27.355 "enable_new_session_tickets": true 00:28:27.355 } 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "method": "sock_impl_set_options", 00:28:27.355 "params": { 00:28:27.355 "impl_name": "posix", 00:28:27.355 "recv_buf_size": 2097152, 00:28:27.355 "send_buf_size": 2097152, 00:28:27.355 "enable_recv_pipe": true, 00:28:27.355 "enable_quickack": false, 00:28:27.355 "enable_placement_id": 0, 00:28:27.355 "enable_zerocopy_send_server": true, 00:28:27.355 "enable_zerocopy_send_client": false, 00:28:27.355 "zerocopy_threshold": 0, 00:28:27.355 "tls_version": 0, 00:28:27.355 "enable_ktls": false, 00:28:27.355 "enable_new_session_tickets": false 00:28:27.355 } 00:28:27.355 } 00:28:27.355 ] 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "subsystem": "vmd", 00:28:27.355 "config": [] 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "subsystem": "accel", 00:28:27.355 "config": [ 00:28:27.355 { 00:28:27.355 "method": "accel_set_options", 00:28:27.355 "params": { 00:28:27.355 "small_cache_size": 128, 00:28:27.355 "large_cache_size": 16, 00:28:27.355 "task_count": 2048, 00:28:27.355 "sequence_count": 2048, 00:28:27.355 "buf_count": 2048 00:28:27.355 } 00:28:27.355 } 00:28:27.355 ] 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "subsystem": "bdev", 00:28:27.355 "config": [ 00:28:27.355 { 00:28:27.355 "method": "bdev_set_options", 00:28:27.355 "params": { 00:28:27.355 "bdev_io_pool_size": 65535, 00:28:27.355 "bdev_io_cache_size": 256, 00:28:27.355 "bdev_auto_examine": true, 00:28:27.355 "iobuf_small_cache_size": 128, 00:28:27.355 "iobuf_large_cache_size": 16 00:28:27.355 } 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "method": "bdev_raid_set_options", 00:28:27.355 "params": { 00:28:27.355 "process_window_size_kb": 1024 00:28:27.355 } 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "method": "bdev_iscsi_set_options", 00:28:27.355 "params": { 00:28:27.355 "timeout_sec": 30 00:28:27.355 } 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "method": "bdev_nvme_set_options", 00:28:27.355 "params": { 00:28:27.355 "action_on_timeout": "none", 00:28:27.355 "timeout_us": 0, 00:28:27.355 "timeout_admin_us": 0, 00:28:27.355 "keep_alive_timeout_ms": 10000, 00:28:27.355 "arbitration_burst": 0, 00:28:27.355 "low_priority_weight": 0, 00:28:27.355 "medium_priority_weight": 0, 00:28:27.355 "high_priority_weight": 0, 00:28:27.355 "nvme_adminq_poll_period_us": 10000, 00:28:27.355 "nvme_ioq_poll_period_us": 0, 00:28:27.355 "io_queue_requests": 0, 00:28:27.355 "delay_cmd_submit": true, 00:28:27.355 "transport_retry_count": 4, 00:28:27.355 "bdev_retry_count": 3, 00:28:27.355 "transport_ack_timeout": 0, 00:28:27.355 "ctrlr_loss_timeout_sec": 0, 00:28:27.355 "reconnect_delay_sec": 0, 00:28:27.355 "fast_io_fail_timeout_sec": 0, 00:28:27.355 
"disable_auto_failback": false, 00:28:27.355 "generate_uuids": false, 00:28:27.355 "transport_tos": 0, 00:28:27.355 "nvme_error_stat": false, 00:28:27.355 "rdma_srq_size": 0, 00:28:27.355 "io_path_stat": false, 00:28:27.355 "allow_accel_sequence": false, 00:28:27.355 "rdma_max_cq_size": 0, 00:28:27.355 "rdma_cm_event_timeout_ms": 0, 00:28:27.355 "dhchap_digests": [ 00:28:27.355 "sha256", 00:28:27.355 "sha384", 00:28:27.355 "sha512" 00:28:27.355 ], 00:28:27.355 "dhchap_dhgroups": [ 00:28:27.355 "null", 00:28:27.355 "ffdhe2048", 00:28:27.355 "ffdhe3072", 00:28:27.355 "ffdhe4096", 00:28:27.355 "ffdhe6144", 00:28:27.355 "ffdhe8192" 00:28:27.355 ] 00:28:27.355 } 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "method": "bdev_nvme_set_hotplug", 00:28:27.355 "params": { 00:28:27.355 "period_us": 100000, 00:28:27.355 "enable": false 00:28:27.355 } 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "method": "bdev_malloc_create", 00:28:27.355 "params": { 00:28:27.355 "name": "malloc0", 00:28:27.355 "num_blocks": 8192, 00:28:27.355 "block_size": 4096, 00:28:27.355 "physical_block_size": 4096, 00:28:27.355 "uuid": "8cd5a1b5-a01e-4d17-bbfa-dca90d024189", 00:28:27.355 "optimal_io_boundary": 0 00:28:27.355 } 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "method": "bdev_wait_for_examine" 00:28:27.355 } 00:28:27.355 ] 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "subsystem": "nbd", 00:28:27.355 "config": [] 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "subsystem": "scheduler", 00:28:27.355 "config": [ 00:28:27.355 { 00:28:27.355 "method": "framework_set_scheduler", 00:28:27.355 "params": { 00:28:27.355 "name": "static" 00:28:27.355 } 00:28:27.355 } 00:28:27.355 ] 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "subsystem": "nvmf", 00:28:27.355 "config": [ 00:28:27.355 { 00:28:27.355 "method": "nvmf_set_config", 00:28:27.355 "params": { 00:28:27.355 "discovery_filter": "match_any", 00:28:27.355 "admin_cmd_passthru": { 00:28:27.355 "identify_ctrlr": false 00:28:27.355 } 00:28:27.355 } 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "method": "nvmf_set_max_subsystems", 00:28:27.355 "params": { 00:28:27.355 "max_subsystems": 1024 00:28:27.355 } 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "method": "nvmf_set_crdt", 00:28:27.355 "params": { 00:28:27.355 "crdt1": 0, 00:28:27.355 "crdt2": 0, 00:28:27.355 "crdt3": 0 00:28:27.355 } 00:28:27.355 }, 00:28:27.355 { 00:28:27.355 "method": "nvmf_create_transport", 00:28:27.355 "params": { 00:28:27.355 "trtype": "TCP", 00:28:27.355 "max_queue_depth": 128, 00:28:27.356 "max_io_qpairs_per_ctrlr": 127, 00:28:27.356 "in_capsule_data_size": 4096, 00:28:27.356 "max_io_size": 131072, 00:28:27.356 "io_unit_size": 131072, 00:28:27.356 "max_aq_depth": 128, 00:28:27.356 "num_shared_buffers": 511, 00:28:27.356 "buf_cache_size": 4294967295, 00:28:27.356 "dif_insert_or_strip": false, 00:28:27.356 "zcopy": false, 00:28:27.356 "c2h_success": false, 00:28:27.356 "sock_priority": 0, 00:28:27.356 "abort_timeout_sec": 1, 00:28:27.356 "ack_timeout": 0, 00:28:27.356 "data_wr_pool_size": 0 00:28:27.356 } 00:28:27.356 }, 00:28:27.356 { 00:28:27.356 "method": "nvmf_create_subsystem", 00:28:27.356 "params": { 00:28:27.356 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.356 "allow_any_host": false, 00:28:27.356 "serial_number": "00000000000000000000", 00:28:27.356 "model_number": "SPDK bdev Controller", 00:28:27.356 "max_namespaces": 32, 00:28:27.356 "min_cntlid": 1, 00:28:27.356 "max_cntlid": 65519, 00:28:27.356 "ana_reporting": false 00:28:27.356 } 00:28:27.356 }, 00:28:27.356 { 00:28:27.356 "method": 
"nvmf_subsystem_add_host", 00:28:27.356 "params": { 00:28:27.356 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.356 "host": "nqn.2016-06.io.spdk:host1", 00:28:27.356 "psk": "key0" 00:28:27.356 } 00:28:27.356 }, 00:28:27.356 { 00:28:27.356 "method": "nvmf_subsystem_add_ns", 00:28:27.356 "params": { 00:28:27.356 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.356 "namespace": { 00:28:27.356 "nsid": 1, 00:28:27.356 "bdev_name": "malloc0", 00:28:27.356 "nguid": "8CD5A1B5A01E4D17BBFADCA90D024189", 00:28:27.356 "uuid": "8cd5a1b5-a01e-4d17-bbfa-dca90d024189", 00:28:27.356 "no_auto_visible": false 00:28:27.356 } 00:28:27.356 } 00:28:27.356 }, 00:28:27.356 { 00:28:27.356 "method": "nvmf_subsystem_add_listener", 00:28:27.356 "params": { 00:28:27.356 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.356 "listen_address": { 00:28:27.356 "trtype": "TCP", 00:28:27.356 "adrfam": "IPv4", 00:28:27.356 "traddr": "10.0.0.2", 00:28:27.356 "trsvcid": "4420" 00:28:27.356 }, 00:28:27.356 "secure_channel": true 00:28:27.356 } 00:28:27.356 } 00:28:27.356 ] 00:28:27.356 } 00:28:27.356 ] 00:28:27.356 }' 00:28:27.356 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:27.356 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:27.356 13:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1482126 00:28:27.356 13:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:28:27.356 13:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1482126 00:28:27.356 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1482126 ']' 00:28:27.356 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.356 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:27.356 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.356 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:27.356 13:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:27.356 [2024-06-10 13:56:41.778750] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:28:27.356 [2024-06-10 13:56:41.778817] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.615 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.616 [2024-06-10 13:56:41.902872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.616 [2024-06-10 13:56:41.985755] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.616 [2024-06-10 13:56:41.985798] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.616 [2024-06-10 13:56:41.985811] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.616 [2024-06-10 13:56:41.985823] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:27.616 [2024-06-10 13:56:41.985833] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.616 [2024-06-10 13:56:41.985905] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.874 [2024-06-10 13:56:42.202470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.874 [2024-06-10 13:56:42.234466] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:27.874 [2024-06-10 13:56:42.244850] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1482313 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1482313 /var/tmp/bdevperf.sock 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1482313 ']' 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:28.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:28.442 13:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:28:28.442 "subsystems": [ 00:28:28.442 { 00:28:28.442 "subsystem": "keyring", 00:28:28.442 "config": [ 00:28:28.443 { 00:28:28.443 "method": "keyring_file_add_key", 00:28:28.443 "params": { 00:28:28.443 "name": "key0", 00:28:28.443 "path": "/tmp/tmp.s7tlBSc9pl" 00:28:28.443 } 00:28:28.443 } 00:28:28.443 ] 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "subsystem": "iobuf", 00:28:28.443 "config": [ 00:28:28.443 { 00:28:28.443 "method": "iobuf_set_options", 00:28:28.443 "params": { 00:28:28.443 "small_pool_count": 8192, 00:28:28.443 "large_pool_count": 1024, 00:28:28.443 "small_bufsize": 8192, 00:28:28.443 "large_bufsize": 135168 00:28:28.443 } 00:28:28.443 } 00:28:28.443 ] 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "subsystem": "sock", 00:28:28.443 "config": [ 00:28:28.443 { 00:28:28.443 "method": "sock_set_default_impl", 00:28:28.443 "params": { 00:28:28.443 "impl_name": "posix" 00:28:28.443 } 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "method": "sock_impl_set_options", 00:28:28.443 "params": { 00:28:28.443 "impl_name": "ssl", 00:28:28.443 "recv_buf_size": 4096, 00:28:28.443 "send_buf_size": 4096, 00:28:28.443 "enable_recv_pipe": true, 00:28:28.443 "enable_quickack": false, 00:28:28.443 "enable_placement_id": 0, 00:28:28.443 "enable_zerocopy_send_server": true, 00:28:28.443 "enable_zerocopy_send_client": false, 00:28:28.443 "zerocopy_threshold": 0, 00:28:28.443 "tls_version": 0, 00:28:28.443 "enable_ktls": false, 00:28:28.443 "enable_new_session_tickets": true 00:28:28.443 } 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "method": "sock_impl_set_options", 00:28:28.443 "params": { 00:28:28.443 "impl_name": "posix", 00:28:28.443 "recv_buf_size": 2097152, 00:28:28.443 "send_buf_size": 2097152, 00:28:28.443 "enable_recv_pipe": true, 00:28:28.443 "enable_quickack": false, 00:28:28.443 "enable_placement_id": 0, 00:28:28.443 "enable_zerocopy_send_server": true, 00:28:28.443 "enable_zerocopy_send_client": false, 00:28:28.443 "zerocopy_threshold": 0, 00:28:28.443 "tls_version": 0, 00:28:28.443 "enable_ktls": false, 00:28:28.443 "enable_new_session_tickets": false 00:28:28.443 } 00:28:28.443 } 00:28:28.443 ] 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "subsystem": "vmd", 00:28:28.443 "config": [] 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "subsystem": "accel", 00:28:28.443 "config": [ 00:28:28.443 { 00:28:28.443 "method": "accel_set_options", 00:28:28.443 "params": { 00:28:28.443 "small_cache_size": 128, 00:28:28.443 "large_cache_size": 16, 00:28:28.443 "task_count": 2048, 00:28:28.443 "sequence_count": 2048, 00:28:28.443 "buf_count": 2048 00:28:28.443 } 00:28:28.443 } 00:28:28.443 ] 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "subsystem": "bdev", 00:28:28.443 "config": [ 00:28:28.443 { 00:28:28.443 "method": "bdev_set_options", 00:28:28.443 "params": { 00:28:28.443 "bdev_io_pool_size": 65535, 00:28:28.443 "bdev_io_cache_size": 256, 00:28:28.443 "bdev_auto_examine": true, 00:28:28.443 "iobuf_small_cache_size": 128, 00:28:28.443 "iobuf_large_cache_size": 16 00:28:28.443 } 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "method": 
"bdev_raid_set_options", 00:28:28.443 "params": { 00:28:28.443 "process_window_size_kb": 1024 00:28:28.443 } 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "method": "bdev_iscsi_set_options", 00:28:28.443 "params": { 00:28:28.443 "timeout_sec": 30 00:28:28.443 } 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "method": "bdev_nvme_set_options", 00:28:28.443 "params": { 00:28:28.443 "action_on_timeout": "none", 00:28:28.443 "timeout_us": 0, 00:28:28.443 "timeout_admin_us": 0, 00:28:28.443 "keep_alive_timeout_ms": 10000, 00:28:28.443 "arbitration_burst": 0, 00:28:28.443 "low_priority_weight": 0, 00:28:28.443 "medium_priority_weight": 0, 00:28:28.443 "high_priority_weight": 0, 00:28:28.443 "nvme_adminq_poll_period_us": 10000, 00:28:28.443 "nvme_ioq_poll_period_us": 0, 00:28:28.443 "io_queue_requests": 512, 00:28:28.443 "delay_cmd_submit": true, 00:28:28.443 "transport_retry_count": 4, 00:28:28.443 "bdev_retry_count": 3, 00:28:28.443 "transport_ack_timeout": 0, 00:28:28.443 "ctrlr_loss_timeout_sec": 0, 00:28:28.443 "reconnect_delay_sec": 0, 00:28:28.443 "fast_io_fail_timeout_sec": 0, 00:28:28.443 "disable_auto_failback": false, 00:28:28.443 "generate_uuids": false, 00:28:28.443 "transport_tos": 0, 00:28:28.443 "nvme_error_stat": false, 00:28:28.443 "rdma_srq_size": 0, 00:28:28.443 "io_path_stat": false, 00:28:28.443 "allow_accel_sequence": false, 00:28:28.443 "rdma_max_cq_size": 0, 00:28:28.443 "rdma_cm_event_timeout_ms": 0, 00:28:28.443 "dhchap_digests": [ 00:28:28.443 "sha256", 00:28:28.443 "sha384", 00:28:28.443 "sha512" 00:28:28.443 ], 00:28:28.443 "dhchap_dhgroups": [ 00:28:28.443 "null", 00:28:28.443 "ffdhe2048", 00:28:28.443 "ffdhe3072", 00:28:28.443 "ffdhe4096", 00:28:28.443 "ffdhe6144", 00:28:28.443 "ffdhe8192" 00:28:28.443 ] 00:28:28.443 } 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "method": "bdev_nvme_attach_controller", 00:28:28.443 "params": { 00:28:28.443 "name": "nvme0", 00:28:28.443 "trtype": "TCP", 00:28:28.443 "adrfam": "IPv4", 00:28:28.443 "traddr": "10.0.0.2", 00:28:28.443 "trsvcid": "4420", 00:28:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.443 "prchk_reftag": false, 00:28:28.443 "prchk_guard": false, 00:28:28.443 "ctrlr_loss_timeout_sec": 0, 00:28:28.443 "reconnect_delay_sec": 0, 00:28:28.443 "fast_io_fail_timeout_sec": 0, 00:28:28.443 "psk": "key0", 00:28:28.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:28.443 "hdgst": false, 00:28:28.443 "ddgst": false 00:28:28.443 } 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "method": "bdev_nvme_set_hotplug", 00:28:28.443 "params": { 00:28:28.443 "period_us": 100000, 00:28:28.443 "enable": false 00:28:28.443 } 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "method": "bdev_enable_histogram", 00:28:28.443 "params": { 00:28:28.443 "name": "nvme0n1", 00:28:28.443 "enable": true 00:28:28.443 } 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "method": "bdev_wait_for_examine" 00:28:28.443 } 00:28:28.443 ] 00:28:28.443 }, 00:28:28.443 { 00:28:28.443 "subsystem": "nbd", 00:28:28.443 "config": [] 00:28:28.443 } 00:28:28.443 ] 00:28:28.443 }' 00:28:28.443 [2024-06-10 13:56:42.774812] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:28:28.443 [2024-06-10 13:56:42.774875] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482313 ] 00:28:28.443 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.443 [2024-06-10 13:56:42.884556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.702 [2024-06-10 13:56:42.966236] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.702 [2024-06-10 13:56:43.123311] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:29.268 13:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:29.268 13:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:28:29.268 13:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:29.268 13:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:28:29.527 13:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.527 13:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:29.527 Running I/O for 1 seconds... 00:28:30.904 00:28:30.904 Latency(us) 00:28:30.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.904 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.905 Verification LBA range: start 0x0 length 0x2000 00:28:30.905 nvme0n1 : 1.04 3515.84 13.73 0.00 0.00 35808.66 6343.88 63753.42 00:28:30.905 =================================================================================================================== 00:28:30.905 Total : 3515.84 13.73 0.00 0.00 35808.66 6343.88 63753.42 00:28:30.905 0 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:30.905 nvmf_trace.0 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1482313 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1482313 ']' 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1482313 
00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1482313 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1482313' 00:28:30.905 killing process with pid 1482313 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1482313 00:28:30.905 Received shutdown signal, test time was about 1.000000 seconds 00:28:30.905 00:28:30.905 Latency(us) 00:28:30.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.905 =================================================================================================================== 00:28:30.905 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1482313 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:30.905 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:30.905 rmmod nvme_tcp 00:28:31.163 rmmod nvme_fabrics 00:28:31.163 rmmod nvme_keyring 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1482126 ']' 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1482126 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1482126 ']' 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1482126 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1482126 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1482126' 00:28:31.163 killing process with pid 1482126 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1482126 00:28:31.163 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1482126 00:28:31.421 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:31.422 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:31.422 13:56:45 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:31.422 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:31.422 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:31.422 13:56:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.422 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.422 13:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.328 13:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:33.328 13:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.mSof1nV6oG /tmp/tmp.F2KW9NR7WD /tmp/tmp.s7tlBSc9pl 00:28:33.328 00:28:33.328 real 1m34.115s 00:28:33.328 user 2m20.996s 00:28:33.328 sys 0m36.480s 00:28:33.328 13:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:33.328 13:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:33.328 ************************************ 00:28:33.328 END TEST nvmf_tls 00:28:33.328 ************************************ 00:28:33.588 13:56:47 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:28:33.588 13:56:47 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:33.588 13:56:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:33.588 13:56:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.588 ************************************ 00:28:33.588 START TEST nvmf_fips 00:28:33.588 ************************************ 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:28:33.588 * Looking for test storage... 
00:28:33.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.588 13:56:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.588 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.588 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.588 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.589 13:56:48 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.589 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:28:33.849 Error setting digest 00:28:33.849 00327CC7FE7E0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:28:33.849 00327CC7FE7E0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:28:33.849 13:56:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:41.971 
13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:41.971 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:41.971 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:41.971 Found net devices under 0000:af:00.0: cvl_0_0 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:41.971 Found net devices under 0000:af:00.1: cvl_0_1 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:41.971 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.972 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.232 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.232 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.232 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:42.232 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.232 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.232 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.232 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:42.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:28:42.232 00:28:42.232 --- 10.0.0.2 ping statistics --- 00:28:42.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.232 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:28:42.232 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:42.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:28:42.490 00:28:42.490 --- 10.0.0.1 ping statistics --- 00:28:42.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.490 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1487154 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1487154 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 1487154 ']' 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:42.490 13:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:42.490 [2024-06-10 13:56:56.837649] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:28:42.490 [2024-06-10 13:56:56.837715] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.490 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.490 [2024-06-10 13:56:56.954720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.763 [2024-06-10 13:56:57.041212] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.763 [2024-06-10 13:56:57.041257] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:42.763 [2024-06-10 13:56:57.041271] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.763 [2024-06-10 13:56:57.041283] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.763 [2024-06-10 13:56:57.041293] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.763 [2024-06-10 13:56:57.041319] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.351 13:56:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:43.351 13:56:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:28:43.351 13:56:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:43.351 13:56:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:43.351 13:56:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:43.351 13:56:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.351 13:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:28:43.351 13:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:28:43.351 13:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:28:43.351 13:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:28:43.352 13:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:28:43.352 13:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:28:43.352 13:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:28:43.352 13:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:43.611 [2024-06-10 13:56:57.974039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.611 [2024-06-10 13:56:57.990038] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:43.611 [2024-06-10 13:56:57.990238] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.611 [2024-06-10 13:56:58.019414] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:43.611 malloc0 00:28:43.611 13:56:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:43.611 13:56:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1487441 00:28:43.611 13:56:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:43.611 13:56:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1487441 /var/tmp/bdevperf.sock 00:28:43.611 13:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 1487441 ']' 00:28:43.611 13:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:43.611 13:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- 
# local max_retries=100 00:28:43.611 13:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:43.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:43.611 13:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:43.611 13:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:43.871 [2024-06-10 13:56:58.112995] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:28:43.871 [2024-06-10 13:56:58.113063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487441 ] 00:28:43.871 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.871 [2024-06-10 13:56:58.206949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.871 [2024-06-10 13:56:58.276352] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.807 13:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:44.807 13:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:28:44.807 13:56:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:28:44.807 [2024-06-10 13:56:59.166354] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:44.807 [2024-06-10 13:56:59.166444] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:44.807 TLSTESTn1 00:28:44.807 13:56:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:45.066 Running I/O for 10 seconds... 
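For readers following the trace, the bdevperf-side attach sequence above reduces to the short sketch below. It is illustrative only: relative paths stand in for the full Jenkins workspace paths printed in the log, and the socket, addresses, NQNs and PSK file are the ones this particular run used.

  # Start bdevperf as an RPC server on a private socket (it idles until told to run).
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # Attach an NVMe/TCP controller, handing it the interchange-format TLS PSK written earlier.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk ./test/nvmf/fips/key.txt
  # Trigger the queued verify workload over the same RPC socket.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

As the warnings in the trace note, passing a PSK file this way (and via spdk_nvme_ctrlr_opts.psk) is a deprecated path scheduled for removal in v24.09.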
00:28:55.043 00:28:55.043 Latency(us) 00:28:55.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.043 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:55.043 Verification LBA range: start 0x0 length 0x2000 00:28:55.043 TLSTESTn1 : 10.03 3486.84 13.62 0.00 0.00 36635.68 5373.95 51799.65 00:28:55.043 =================================================================================================================== 00:28:55.043 Total : 3486.84 13.62 0.00 0.00 36635.68 5373.95 51799.65 00:28:55.043 0 00:28:55.043 13:57:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:28:55.043 13:57:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:28:55.043 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:28:55.043 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:28:55.043 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:28:55.043 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:55.043 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:28:55.043 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:28:55.043 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:28:55.043 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:55.043 nvmf_trace.0 00:28:55.301 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:28:55.301 13:57:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1487441 00:28:55.302 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 1487441 ']' 00:28:55.302 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 1487441 00:28:55.302 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:28:55.302 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:55.302 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1487441 00:28:55.302 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:28:55.302 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:28:55.302 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1487441' 00:28:55.302 killing process with pid 1487441 00:28:55.302 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 1487441 00:28:55.302 Received shutdown signal, test time was about 10.000000 seconds 00:28:55.302 00:28:55.302 Latency(us) 00:28:55.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.302 =================================================================================================================== 00:28:55.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:55.302 [2024-06-10 13:57:09.611125] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:55.302 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 1487441 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:55.561 rmmod nvme_tcp 00:28:55.561 rmmod nvme_fabrics 00:28:55.561 rmmod nvme_keyring 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1487154 ']' 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1487154 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 1487154 ']' 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 1487154 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1487154 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1487154' 00:28:55.561 killing process with pid 1487154 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 1487154 00:28:55.561 [2024-06-10 13:57:09.917944] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:55.561 13:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 1487154 00:28:55.820 13:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:55.820 13:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:55.820 13:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:55.820 13:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:55.820 13:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:55.820 13:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.820 13:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:55.820 13:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.354 13:57:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:58.354 13:57:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:28:58.354 00:28:58.354 real 0m24.354s 00:28:58.354 user 0m23.590s 00:28:58.354 sys 0m12.239s 00:28:58.354 13:57:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:58.354 13:57:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:58.354 ************************************ 00:28:58.354 END TEST nvmf_fips 
00:28:58.354 ************************************ 00:28:58.354 13:57:12 nvmf_tcp -- nvmf/nvmf.sh@63 -- # run_test nvmf_kernel_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/spdk_vs_kernel_tls.sh --transport=tcp 00:28:58.354 13:57:12 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:28:58.354 13:57:12 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:58.354 13:57:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:58.354 ************************************ 00:28:58.354 START TEST nvmf_kernel_tls 00:28:58.354 ************************************ 00:28:58.354 13:57:12 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/spdk_vs_kernel_tls.sh --transport=tcp 00:28:58.354 Joined session keyring: 791306787 00:28:58.354 * Looking for test storage... 00:28:58.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:58.354 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.354 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@7 -- # uname -s 00:28:58.354 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.354 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.354 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.354 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.354 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.354 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.354 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- paths/export.sh@5 -- # export PATH 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@47 -- # : 0 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@13 -- # fio_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper 00:28:58.355 13:57:12 
nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@14 -- # bdevperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@16 -- # SPEC_KEY=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@19 -- # SPEC_SUBSYSNQN=nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@20 -- # SPEC_HOSTID=f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@21 -- # SPEC_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@22 -- # PSK_IDENTITY='NVMe0R01 nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2' 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@23 -- # TLSHD_CONF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/tlshd.conf 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@24 -- # SPDK_PSK_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@25 -- # PSK_NAME=psk0 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@26 -- # CONTROLLER_NAME=TLSTEST 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@27 -- # nvmet=/sys/kernel/config/nvmet 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@28 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@29 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@30 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@31 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@93 -- # '[' tcp '!=' tcp ']' 00:28:58.355 13:57:12 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:01.642 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:80:04.4 (8086 2021): Already using the 
vfio-pci driver 00:29:01.642 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:01.642 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:01.642 13:57:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@100 -- # nvmftestinit 00:29:01.642 13:57:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:01.642 13:57:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.642 13:57:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:01.642 13:57:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:01.642 13:57:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:01.642 13:57:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.642 13:57:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:01.643 13:57:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.643 13:57:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:01.643 13:57:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:01.643 13:57:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:29:01.643 13:57:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@295 -- # net_devs=() 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@296 -- # e810=() 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@296 -- # local -ga e810 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@297 -- # x722=() 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@297 -- # local -ga x722 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@298 -- # mlx=() 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:11.621 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:11.622 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:11.622 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
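The device-discovery loop above works purely through sysfs: for every supported PCI function it lists /sys/bus/pci/devices/<bdf>/net/ and strips the directory prefix to get the kernel interface name. A minimal stand-alone sketch, using the E810 port found in this run as the example BDF (everything else is illustrative):

  pci=0000:af:00.0                                   # example function from the log above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"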
00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:11.622 Found net devices under 0000:af:00.0: cvl_0_0 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:11.622 Found net devices under 0000:af:00.1: cvl_0_1 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.622 13:57:24 
nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:11.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:29:11.622 00:29:11.622 --- 10.0.0.2 ping statistics --- 00:29:11.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.622 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:11.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:29:11.622 00:29:11.622 --- 10.0.0.1 ping statistics --- 00:29:11.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.622 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@422 -- # return 0 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@102 -- # timing_enter prepare_keyring_and_daemon 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@104 -- # keyctl show 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@104 -- # awk '{print $1}' 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@104 -- # tail -1 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@104 -- # session_id=791306787 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@105 -- # keyring_name=test_791306787 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@106 -- # keyctl newring test_791306787 791306787 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@106 -- # keyring_id=277676769 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@107 -- # keyctl setperm 277676769 0x3f3f0b00 00:29:11.622 
13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@109 -- # key_name=test_key_791306787 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/tls_psk_print -k NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: -s nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 -n nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@111 -- # keyctl add psk 'NVMe0R01 nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2' '��f�j��i��F�{��=8���&LM��u�F' 277676769 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@111 -- # key_id=565729572 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@113 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@114 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@116 -- # construct_tlshd_conf test_791306787 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@48 -- # local keyring_name=test_791306787 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@49 -- # cat 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@118 -- # tlshdpid=1496003 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@117 -- # tlshd -s -c /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/tlshd.conf 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@120 -- # timing_exit prepare_keyring_and_daemon 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:11.622 tlshd[1496003]: Built from ktls-utils 0.10 on Oct 7 2023 00:00:00 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@123 -- # timing_enter start_nvmf_tgt 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:11.622 13:57:24 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:11.622 13:57:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@125 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:29:11.622 13:57:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:11.622 13:57:25 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:11.622 13:57:25 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@481 -- # nvmfpid=1496023 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@482 -- # waitforlisten 1496023 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@830 -- # '[' -z 1496023 ']' 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.623 13:57:25 
nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:11.623 [2024-06-10 13:57:25.048098] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:29:11.623 [2024-06-10 13:57:25.048152] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.623 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.623 [2024-06-10 13:57:25.149790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.623 [2024-06-10 13:57:25.234346] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.623 [2024-06-10 13:57:25.234390] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.623 [2024-06-10 13:57:25.234404] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.623 [2024-06-10 13:57:25.234416] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.623 [2024-06-10 13:57:25.234427] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.623 [2024-06-10 13:57:25.234451] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@863 -- # return 0 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:11.623 13:57:25 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:11.623 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.623 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@126 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:11.623 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@127 -- # waitforlisten 1496023 00:29:11.623 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@830 -- # '[' -z 1496023 ']' 00:29:11.623 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.623 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:11.623 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
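Condensed, the prepare_keyring_and_daemon phase traced above performs the steps below. This is a sketch under the assumption that SPEC_KEY, SPEC_SUBSYSNQN and SPEC_HOSTNQN hold the values printed earlier in the trace; the retained PSK emitted by tls_psk_print is raw binary, which is why it appears garbled in the keyctl add line above.

  # Create a keyring in the current session and open up its permissions.
  session_id=$(keyctl show | awk '{print $1}' | tail -1)
  keyring_id=$(keyctl newring "test_${session_id}" "${session_id}")
  keyctl setperm "${keyring_id}" 0x3f3f0b00
  # Derive the retained PSK for this host/subsystem pair and store it under the
  # identity string ("NVMe0R01 <hostnqn> <subnqn>") that the kernel looks up
  # during the TLS handshake.
  psk_raw=$(./build/examples/tls_psk_print -k "$SPEC_KEY" -s "$SPEC_SUBSYSNQN" -n "$SPEC_HOSTNQN")
  key_id=$(keyctl add psk "NVMe0R01 ${SPEC_HOSTNQN} ${SPEC_SUBSYSNQN}" "$psk_raw" "$keyring_id")
  # Point tlshd at a config file that references this keyring and run it as a daemon.
  tlshd -s -c ./test/nvmf/tlshd.conf &

The nvmf target is then started inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, which is why the trace pauses here waiting for /var/tmp/spdk.sock.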
00:29:11.623 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:11.623 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:11.881 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:11.881 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@863 -- # return 0 00:29:11.881 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@128 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:29:11.882 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@77 -- # local psk_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:29:11.882 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@79 -- # rpc_cmd sock_impl_set_options -i ssl --enable-ktls --tls-version 13 00:29:11.882 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.882 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:11.882 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.882 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@80 -- # rpc_cmd framework_start_init 00:29:11.882 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.882 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:11.882 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.882 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@81 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:11.882 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.882 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:11.882 [2024-06-10 13:57:26.350572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 -s SPDKISFASTANDAWESOME -m 10 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 -t tcp -a 10.0.0.2 -s 4420 -k -c 1 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:12.141 [2024-06-10 13:57:26.370595] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:12.141 [2024-06-10 13:57:26.370838] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@85 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:12.141 malloc0 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@86 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 malloc0 -n 1 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@88 -- # rpc_cmd keyring_file_add_key psk0 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@90 -- # rpc_cmd nvmf_subsystem_add_host nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 --psk psk0 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@130 -- # timing_exit start_nvmf_tgt 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:29:12.141 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@132 -- # nvme connect --nqn=nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 --traddr=10.0.0.2 --trsvcid=4420 --transport=tcp --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 --hostid=f81d4fae-7dec-11d0-a765-00a0c91e6bf6 --tls -o normal --verbose --tls_key=565729572 --keyring=277676769 -i 1 00:29:12.399 tlshd[1496329]: Name or service not known 00:29:12.399 tlshd[1496329]: Handshake with unknown (10.0.0.2) was successful 00:29:12.658 tlshd[1496335]: Name or service not known 00:29:12.658 tlshd[1496335]: Handshake with unknown (10.0.0.2) was successful 00:29:12.658 nvme0: nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 connected 00:29:12.658 device: nvme0 00:29:12.658 13:57:26 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@136 -- # waitforserial SPDKISFASTANDAWESOME 00:29:12.658 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1197 -- # local i=0 00:29:12.658 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:29:12.658 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:29:12.658 13:57:26 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1204 -- # sleep 2 00:29:14.561 13:57:28 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:29:14.561 13:57:28 
nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:29:14.561 13:57:28 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:29:14.561 13:57:28 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:29:14.561 13:57:28 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:29:14.561 13:57:28 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1207 -- # return 0 00:29:14.561 13:57:28 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 5 00:29:14.561 [global] 00:29:14.561 thread=1 00:29:14.561 invalidate=1 00:29:14.561 rw=read 00:29:14.561 time_based=1 00:29:14.561 runtime=5 00:29:14.561 ioengine=libaio 00:29:14.561 direct=1 00:29:14.561 bs=4096 00:29:14.561 iodepth=1 00:29:14.561 norandommap=1 00:29:14.561 numjobs=1 00:29:14.561 00:29:14.561 [job0] 00:29:14.561 filename=/dev/nvme0n1 00:29:14.835 Could not set queue depth (nvme0n1) 00:29:15.095 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:15.095 fio-3.35 00:29:15.095 Starting 1 thread 00:29:47.150 [2024-06-10 13:57:57.354394] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:47.150 [2024-06-10 13:57:57.432901] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:53.761 tlshd[1503260]: Name or service not known 00:29:53.761 tlshd[1503260]: Handshake with unknown (10.0.0.2) was successful 00:29:53.761 tlshd[1503319]: Name or service not known 00:29:53.761 tlshd[1503319]: Handshake with unknown (10.0.0.2) was successful 00:29:53.761 00:29:53.761 job0: (groupid=0, jobs=1): err= 0: pid=1496917: Mon Jun 10 13:58:08 2024 00:29:53.761 read: IOPS=0, BW=106B/s (106B/s)(4096B/38501msec) 00:29:53.761 slat (nsec): min=45194, max=45194, avg=45194.00, stdev= 0.00 00:29:53.761 clat (nsec): min=38500M, max=38500M, avg=38499859396.00, stdev= 0.00 00:29:53.761 lat (nsec): min=38500M, max=38500M, avg=38499904590.00, stdev= 0.00 00:29:53.761 clat percentiles (msec): 00:29:53.761 | 1.00th=[17113], 5.00th=[17113], 10.00th=[17113], 20.00th=[17113], 00:29:53.761 | 30.00th=[17113], 40.00th=[17113], 50.00th=[17113], 60.00th=[17113], 00:29:53.761 | 70.00th=[17113], 80.00th=[17113], 90.00th=[17113], 95.00th=[17113], 00:29:53.761 | 99.00th=[17113], 99.50th=[17113], 99.90th=[17113], 99.95th=[17113], 00:29:53.761 | 99.99th=[17113] 00:29:53.761 lat (msec) : >=2000=100.00% 00:29:53.761 cpu : usr=0.00%, sys=0.00%, ctx=2, majf=0, minf=1 00:29:53.761 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:53.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.761 issued rwts: total=1,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:53.761 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:53.761 00:29:53.761 Run status group 0 (all jobs): 00:29:53.761 READ: bw=106B/s (106B/s), 106B/s-106B/s (106B/s-106B/s), io=4096B (4096B), run=38501-38501msec 00:29:53.761 00:29:53.761 Disk stats (read/write): 00:29:53.761 
nvme0n1: ios=5/0, merge=0/0, ticks=152191/0, in_queue=152191, util=99.84% 00:29:53.761 13:58:08 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@142 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:53.761 13:58:08 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@140 -- # nvme disconnect --nqn=nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:29:53.761 13:58:08 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1218 -- # local i=0 00:29:53.761 13:58:08 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:53.761 13:58:08 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:53.761 13:58:08 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1220 -- # '[' 0 -lt 15 ']' 00:29:53.761 13:58:08 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1221 -- # i=1 00:29:53.761 13:58:08 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1222 -- # echo 'Waiting for disconnect devices' 00:29:53.761 Waiting for disconnect devices 00:29:53.761 13:58:08 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1223 -- # sleep 1 00:29:53.761 [2024-06-10 13:58:08.067775] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:53.761 [2024-06-10 13:58:08.068442] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:53.761 NQN:nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 disconnected 1 controller(s) 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1230 -- # return 0 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@143 -- # killprocess 1496023 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@949 -- # '[' -z 1496023 ']' 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@953 -- # kill -0 1496023 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@954 -- # uname 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1496023 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1496023' 00:29:54.694 killing process with pid 1496023 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@968 -- # kill 1496023 00:29:54.694 13:58:09 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@973 -- # wait 1496023 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- 
nvmf/spdk_vs_kernel_tls.sh@146 -- # nvmet_tls_init 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@72 -- # get_main_ns_ip 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@747 -- # local ip 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@72 -- # configure_kernel_target nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 10.0.0.1 4422 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@632 -- # local kernel_name=nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 kernel_target_ip=10.0.0.1 nvmf_port=4422 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2/namespaces/1 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@639 -- # local block nvme 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:54.952 13:58:09 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:59.136 Waiting for block devices as requested 00:29:59.136 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:59.136 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:59.136 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:59.136 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:59.395 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:59.395 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:59.395 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:59.654 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:59.654 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:59.654 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:59.912 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:59.912 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:59.912 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:00.170 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:00.170 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:00.170 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:00.428 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:00.428 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:00.428 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:00.428 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:00.428 13:58:14 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:30:00.428 13:58:14 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:00.428 13:58:14 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:30:00.428 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:00.428 13:58:14 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:00.428 13:58:14 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:00.687 No valid GPT data, bailing 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@391 -- # pt= 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@392 -- # return 1 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@656 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@657 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2/namespaces/1 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@663 -- # echo SPDK-test 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@665 -- # echo 1 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- 
nvmf/common.sh@667 -- # [[ -b /dev/nvme0n1 ]] 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@673 -- # echo /dev/nvme0n1 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@674 -- # echo 1 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@676 -- # echo 10.0.0.1 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@677 -- # echo tcp 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@678 -- # echo 4422 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@679 -- # echo ipv4 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@682 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:00.687 13:58:14 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@685 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4422 00:30:00.687 00:30:00.687 Discovery Log Number of Records 2, Generation counter 2 00:30:00.687 =====Discovery Log Entry 0====== 00:30:00.687 trtype: tcp 00:30:00.687 adrfam: ipv4 00:30:00.687 subtype: current discovery subsystem 00:30:00.687 treq: not specified, sq flow control disable supported 00:30:00.687 portid: 1 00:30:00.687 trsvcid: 4422 00:30:00.687 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:00.687 traddr: 10.0.0.1 00:30:00.687 eflags: none 00:30:00.687 sectype: none 00:30:00.687 =====Discovery Log Entry 1====== 00:30:00.687 trtype: tcp 00:30:00.687 adrfam: ipv4 00:30:00.687 subtype: nvme subsystem 00:30:00.687 treq: not specified, sq flow control disable supported 00:30:00.687 portid: 1 00:30:00.687 trsvcid: 4422 00:30:00.687 subnqn: nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:30:00.687 traddr: 10.0.0.1 00:30:00.687 eflags: none 00:30:00.687 sectype: none 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@73 -- # post_configure_kernel_target 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@61 -- # echo 0 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@62 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@63 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2/allowed_hosts/nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@66 -- # rm /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@67 -- # echo tls1.3 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@68 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@149 -- # bdevperfpid=1505505 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z 00:30:00.687 13:58:15 
nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@150 -- # waitforlisten 1505505 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@830 -- # '[' -z 1505505 ']' 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:00.687 13:58:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:30:00.687 [2024-06-10 13:58:15.124118] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:30:00.687 [2024-06-10 13:58:15.124184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505505 ] 00:30:00.944 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.944 [2024-06-10 13:58:15.216973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.944 [2024-06-10 13:58:15.288763] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.875 13:58:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:01.875 13:58:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@863 -- # return 0 00:30:01.875 13:58:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@152 -- # rpc_cmd keyring_file_add_key psk0 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:30:01.875 13:58:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.875 13:58:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:30:01.875 13:58:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.875 13:58:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@154 -- # get_main_ns_ip 00:30:01.875 13:58:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@747 -- # local ip 00:30:01.875 13:58:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@748 -- # ip_candidates=() 00:30:01.875 13:58:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@748 -- # local -A ip_candidates 00:30:01.875 13:58:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.875 13:58:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.875 13:58:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:30:01.875 13:58:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.875 13:58:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:30:01.876 13:58:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:30:01.876 13:58:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:30:01.876 13:58:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@154 -- # rpc_cmd bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.1 -s 4422 -f ipv4 -n nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 -q 
nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 --psk psk0 00:30:01.876 13:58:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.876 13:58:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:30:01.876 [2024-06-10 13:58:16.009906] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:01.876 tlshd[1505765]: Handshake with spdk-wfp-20 (10.0.0.1) was successful 00:30:01.876 tlshd[1505767]: Handshake with spdk-wfp-20 (10.0.0.1) was successful 00:30:01.876 TLSTESTn1 00:30:01.876 13:58:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.876 13:58:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@157 -- # rpc_cmd bdev_nvme_get_controllers -n TLSTEST 00:30:01.876 13:58:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.876 13:58:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:30:01.876 [ 00:30:01.876 { 00:30:01.876 "name": "TLSTEST", 00:30:01.876 "ctrlrs": [ 00:30:01.876 { 00:30:01.876 "state": "enabled", 00:30:01.876 "trid": { 00:30:01.876 "trtype": "TCP", 00:30:01.876 "adrfam": "IPv4", 00:30:01.876 "traddr": "10.0.0.1", 00:30:01.876 "trsvcid": "4422", 00:30:01.876 "subnqn": "nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2" 00:30:01.876 }, 00:30:01.876 "cntlid": 1, 00:30:01.876 "host": { 00:30:01.876 "nqn": "nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6", 00:30:01.876 "addr": "", 00:30:01.876 "svcid": "" 00:30:01.876 } 00:30:01.876 } 00:30:01.876 ] 00:30:01.876 } 00:30:01.876 ] 00:30:01.876 13:58:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.876 13:58:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@159 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -o 1024 -t 5 -w read 00:30:01.876 Running I/O for 5 seconds... 
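Condensed, the SPDK-initiator side of the TLS run traced above comes down to three RPCs plus the bdevperf trigger. The following is a minimal standalone sketch, not the test script itself: key file, NQNs, address and port are copied from this trace, and it assumes scripts/rpc.py reaches the same /var/tmp/spdk.sock the bdevperf app listens on (rpc_cmd in the trace issues effectively the same calls).

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py

# Register the PSK interchange file under a key name the initiator can reference.
$RPC keyring_file_add_key psk0 $SPDK/test/nvmf/key.txt

# Attach a controller to the kernel target at 10.0.0.1:4422 over TCP with TLS,
# using that PSK; tlshd on the host carries out the actual handshake.
$RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.1 -s 4422 -f ipv4 \
  -n nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 \
  -q nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 \
  --psk psk0

# Verify the controller came up, then run 5 s of queue-depth-1, 1 KiB reads.
$RPC bdev_nvme_get_controllers -n TLSTEST
$SPDK/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -o 1024 -t 5 -w read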
00:30:01.876 tlshd[1505773]: Handshake with spdk-wfp-20 (10.0.0.1) was successful 00:30:07.136 00:30:07.136 Latency(us) 00:30:07.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.136 Job: TLSTESTn1 (Core Mask 0x4, workload: read, depth: 1, IO size: 1024) 00:30:07.136 TLSTESTn1 : 5.00 12155.85 11.87 0.00 0.00 79.61 72.91 3722.44 00:30:07.136 =================================================================================================================== 00:30:07.136 Total : 12155.85 11.87 0.00 0.00 79.61 72.91 3722.44 00:30:07.136 0 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@161 -- # rpc_cmd bdev_nvme_detach_controller TLSTEST 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@163 -- # trap - SIGINT SIGTERM EXIT 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@164 -- # cleanup 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@34 -- # killprocess 1496003 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@949 -- # '[' -z 1496003 ']' 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@953 -- # kill -0 1496003 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@954 -- # uname 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1496003 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@955 -- # process_name=tlshd 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@959 -- # '[' tlshd = sudo ']' 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1496003' 00:30:07.136 killing process with pid 1496003 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@968 -- # kill 1496003 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@973 -- # wait 1496003 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@34 -- # : 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@35 -- # killprocess 1505505 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@949 -- # '[' -z 1505505 ']' 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@953 -- # kill -0 1505505 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@954 -- # uname 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1505505 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1505505' 00:30:07.136 killing process with pid 1505505 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- 
common/autotest_common.sh@968 -- # kill 1505505 00:30:07.136 Received shutdown signal, test time was about 5.000000 seconds 00:30:07.136 00:30:07.136 Latency(us) 00:30:07.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.136 =================================================================================================================== 00:30:07.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@973 -- # wait 1505505 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@36 -- # nvmftestfini 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@117 -- # sync 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@120 -- # set +e 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:07.136 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:07.136 rmmod nvme_tcp 00:30:07.395 rmmod nvme_fabrics 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@124 -- # set -e 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@125 -- # return 0 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@489 -- # '[' -n 1496023 ']' 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@490 -- # killprocess 1496023 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@949 -- # '[' -z 1496023 ']' 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@953 -- # kill -0 1496023 00:30:07.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1496023) - No such process 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@976 -- # echo 'Process with pid 1496023 is not found' 00:30:07.395 Process with pid 1496023 is not found 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:07.395 13:58:21 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.292 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:09.292 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@37 -- # rm -rf /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2/allowed_hosts/nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:30:09.292 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@38 -- # rmdir 
/sys/kernel/config/nvmet/hosts/nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:30:09.292 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@39 -- # clean_kernel_target 00:30:09.292 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@689 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 ]] 00:30:09.292 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@691 -- # echo 0 00:30:09.293 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@693 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:30:09.293 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2/namespaces/1 00:30:09.293 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:09.551 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@696 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:30:09.551 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@698 -- # modules=(/sys/module/nvmet/holders/*) 00:30:09.551 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@700 -- # modprobe -r nvmet_tcp nvmet 00:30:09.551 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@701 -- # modprobe -r null_blk 00:30:09.551 13:58:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@704 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:13.732 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:13.732 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:15.109 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:30:15.109 13:58:29 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@40 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:30:15.109 13:58:29 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@41 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/tlshd.conf 00:30:15.109 13:58:29 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:19.298 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:00:04.1 (8086 
2021): Already using the vfio-pci driver 00:30:19.298 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:19.298 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:19.298 00:30:19.298 real 1m20.977s 00:30:19.298 user 1m4.062s 00:30:19.298 sys 0m22.656s 00:30:19.298 13:58:33 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:19.298 13:58:33 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:30:19.298 ************************************ 00:30:19.298 END TEST nvmf_kernel_tls 00:30:19.298 ************************************ 00:30:19.298 13:58:33 nvmf_tcp -- nvmf/nvmf.sh@67 -- # '[' 0 -eq 1 ']' 00:30:19.298 13:58:33 nvmf_tcp -- nvmf/nvmf.sh@73 -- # [[ phy == phy ]] 00:30:19.298 13:58:33 nvmf_tcp -- nvmf/nvmf.sh@74 -- # '[' tcp = tcp ']' 00:30:19.298 13:58:33 nvmf_tcp -- nvmf/nvmf.sh@75 -- # gather_supported_nvmf_pci_devs 00:30:19.298 13:58:33 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:30:19.298 13:58:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:30:27.414 13:58:41 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:27.415 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:27.415 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:27.415 Found net devices under 0000:af:00.0: cvl_0_0 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:27.415 Found net devices under 0000:af:00.1: cvl_0_1 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/nvmf.sh@76 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/nvmf.sh@77 -- # (( 2 > 0 )) 00:30:27.415 13:58:41 nvmf_tcp -- nvmf/nvmf.sh@78 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:27.415 13:58:41 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:27.415 13:58:41 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:27.415 13:58:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:27.415 ************************************ 00:30:27.415 START TEST nvmf_perf_adq 00:30:27.415 ************************************ 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:27.415 * Looking for test storage... 00:30:27.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.415 13:58:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:30:27.416 13:58:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:37.386 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.386 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:30:37.386 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:37.386 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:37.386 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:37.386 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:37.386 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:37.386 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:30:37.386 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:37.386 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:30:37.386 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:37.387 13:58:50 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:37.387 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:37.387 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:37.387 Found net devices under 0000:af:00.0: cvl_0_0 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.387 13:58:50 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:37.387 Found net devices under 0000:af:00.1: cvl_0_1 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:30:37.387 13:58:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:30:37.387 13:58:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:30:39.337 13:58:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 
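The repeated nvmf/common.sh blocks above and below are the NIC discovery pass: E810 functions are matched by PCI vendor:device ID (0x8086:0x159b on this host) and the net devices behind them become the test interfaces. An illustrative sketch of that matching follows; it is not the common.sh implementation (which builds its own pci_bus_cache) and assumes pciutils is installed.

# List Intel E810 functions by vendor:device ID and the netdevs behind them,
# the way this run resolves 0000:af:00.0 -> cvl_0_0 and 0000:af:00.1 -> cvl_0_1.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
  for dev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$dev" ] && echo "Found net device under $pci: $(basename "$dev")"
  done
done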
00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:44.612 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:44.612 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:44.613 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ 
e810 == e810 ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:44.613 Found net devices under 0000:af:00.0: cvl_0_0 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:44.613 Found net devices under 0000:af:00.1: cvl_0_1 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:44.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:30:44.613 00:30:44.613 --- 10.0.0.2 ping statistics --- 00:30:44.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.613 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:44.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:30:44.613 00:30:44.613 --- 10.0.0.1 ping statistics --- 00:30:44.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.613 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1519850 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1519850 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 1519850 ']' 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:44.613 13:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:44.613 [2024-06-10 13:58:58.985436] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:30:44.613 [2024-06-10 13:58:58.985496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.613 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.873 [2024-06-10 13:58:59.112933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:44.873 [2024-06-10 13:58:59.199294] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.873 [2024-06-10 13:58:59.199338] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.873 [2024-06-10 13:58:59.199352] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.873 [2024-06-10 13:58:59.199364] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.873 [2024-06-10 13:58:59.199374] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
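For reference, the nvmf_tcp_init steps traced above split the two E810 ports between the root network namespace (initiator side, cvl_0_1) and a dedicated namespace that holds the target port (cvl_0_0). A condensed sketch of that sequence follows; interface names, addresses and the namespace name are copied from the log, while everything else (root privileges, the ports already bound to the kernel ice driver) is assumed.

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the initiator port
  ping -c 1 10.0.0.2                                   # root namespace -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator port

The target application is then launched inside that namespace with --wait-for-rpc (as shown in the trace), so the RPC-driven configuration that follows applies to the namespaced nvmf_tgt instance.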
00:30:44.873 [2024-06-10 13:58:59.199471] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.873 [2024-06-10 13:58:59.199586] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.873 [2024-06-10 13:58:59.200071] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.873 [2024-06-10 13:58:59.200074] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.443 13:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:45.443 13:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:30:45.443 13:58:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:45.443 13:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:45.443 13:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.702 13:58:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:45.702 [2024-06-10 13:59:00.091101] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:45.702 Malloc1 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.702 13:59:00 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:45.702 [2024-06-10 13:59:00.142818] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1520027 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:30:45.702 13:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:45.961 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.867 13:59:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:30:47.867 13:59:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:47.867 13:59:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:47.867 13:59:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:47.867 13:59:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:30:47.867 "tick_rate": 2500000000, 00:30:47.867 "poll_groups": [ 00:30:47.867 { 00:30:47.867 "name": "nvmf_tgt_poll_group_000", 00:30:47.867 "admin_qpairs": 1, 00:30:47.867 "io_qpairs": 1, 00:30:47.867 "current_admin_qpairs": 1, 00:30:47.867 "current_io_qpairs": 1, 00:30:47.867 "pending_bdev_io": 0, 00:30:47.867 "completed_nvme_io": 16195, 00:30:47.867 "transports": [ 00:30:47.867 { 00:30:47.867 "trtype": "TCP" 00:30:47.867 } 00:30:47.867 ] 00:30:47.867 }, 00:30:47.867 { 00:30:47.867 "name": "nvmf_tgt_poll_group_001", 00:30:47.867 "admin_qpairs": 0, 00:30:47.867 "io_qpairs": 1, 00:30:47.867 "current_admin_qpairs": 0, 00:30:47.867 "current_io_qpairs": 1, 00:30:47.867 "pending_bdev_io": 0, 00:30:47.867 "completed_nvme_io": 19443, 00:30:47.867 "transports": [ 00:30:47.867 { 00:30:47.867 "trtype": "TCP" 00:30:47.867 } 00:30:47.867 ] 00:30:47.867 }, 00:30:47.867 { 00:30:47.867 "name": "nvmf_tgt_poll_group_002", 00:30:47.867 "admin_qpairs": 0, 00:30:47.867 "io_qpairs": 1, 00:30:47.867 "current_admin_qpairs": 0, 00:30:47.867 "current_io_qpairs": 1, 00:30:47.867 "pending_bdev_io": 0, 00:30:47.867 "completed_nvme_io": 16212, 
00:30:47.867 "transports": [ 00:30:47.867 { 00:30:47.867 "trtype": "TCP" 00:30:47.867 } 00:30:47.867 ] 00:30:47.867 }, 00:30:47.867 { 00:30:47.867 "name": "nvmf_tgt_poll_group_003", 00:30:47.867 "admin_qpairs": 0, 00:30:47.867 "io_qpairs": 1, 00:30:47.867 "current_admin_qpairs": 0, 00:30:47.867 "current_io_qpairs": 1, 00:30:47.867 "pending_bdev_io": 0, 00:30:47.867 "completed_nvme_io": 15606, 00:30:47.867 "transports": [ 00:30:47.867 { 00:30:47.867 "trtype": "TCP" 00:30:47.867 } 00:30:47.867 ] 00:30:47.867 } 00:30:47.867 ] 00:30:47.867 }' 00:30:47.867 13:59:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:30:47.867 13:59:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:30:47.867 13:59:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:30:47.867 13:59:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:30:47.867 13:59:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1520027 00:30:55.991 Initializing NVMe Controllers 00:30:55.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:55.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:55.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:55.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:55.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:55.991 Initialization complete. Launching workers. 00:30:55.992 ======================================================== 00:30:55.992 Latency(us) 00:30:55.992 Device Information : IOPS MiB/s Average min max 00:30:55.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8381.70 32.74 7635.43 2085.91 49393.21 00:30:55.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10264.50 40.10 6235.00 1614.22 10739.02 00:30:55.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8587.30 33.54 7453.30 2384.48 11886.30 00:30:55.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8569.00 33.47 7467.93 1984.44 12596.26 00:30:55.992 ======================================================== 00:30:55.992 Total : 35802.50 139.85 7150.16 1614.22 49393.21 00:30:55.992 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:55.992 rmmod nvme_tcp 00:30:55.992 rmmod nvme_fabrics 00:30:55.992 rmmod nvme_keyring 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1519850 ']' 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1519850 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 1519850 ']' 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 1519850 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1519850 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1519850' 00:30:55.992 killing process with pid 1519850 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 1519850 00:30:55.992 13:59:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 1519850 00:30:56.251 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:56.251 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:56.251 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:56.251 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:56.251 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:56.251 13:59:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.251 13:59:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:56.251 13:59:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.788 13:59:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:58.788 13:59:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:30:58.788 13:59:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:30:59.726 13:59:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:31:02.263 13:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:31:07.541 13:59:21 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:07.541 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:07.541 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.541 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:07.542 Found net devices under 0000:af:00.0: cvl_0_0 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:07.542 Found net devices under 0000:af:00.1: cvl_0_1 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- 
# is_hw=yes 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:07.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:07.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:31:07.542 00:31:07.542 --- 10.0.0.2 ping statistics --- 00:31:07.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.542 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:07.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:31:07.542 00:31:07.542 --- 10.0.0.1 ping statistics --- 00:31:07.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.542 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:31:07.542 net.core.busy_poll = 1 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:31:07.542 net.core.busy_read = 1 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1523964 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1523964 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:07.542 13:59:21 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 1523964 ']' 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:07.542 13:59:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:07.802 [2024-06-10 13:59:22.026781] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:31:07.802 [2024-06-10 13:59:22.026843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.802 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.802 [2024-06-10 13:59:22.154909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:07.802 [2024-06-10 13:59:22.247348] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.802 [2024-06-10 13:59:22.247391] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.802 [2024-06-10 13:59:22.247409] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.802 [2024-06-10 13:59:22.247420] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.802 [2024-06-10 13:59:22.247431] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
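For reference, the adq_configure_driver stage traced above enables hardware traffic-class offload on the ice port, turns on kernel busy polling, and pins NVMe/TCP traffic for 10.0.0.2:4420 into a dedicated hardware traffic class. A condensed sketch of those commands follows; all values are copied from the trace and the namespace wrapper matches the log. This is a sketch of what the test does, not a general ADQ tuning guide.

  NS="ip netns exec cvl_0_0_ns_spdk"
  $NS ethtool --offload cvl_0_0 hw-tc-offload on
  $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes: TC0 = queues 0-1, TC1 = queues 2-3, offloaded to hardware channels
  $NS /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  $NS /usr/sbin/tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP (dst 10.0.0.2:4420) into TC1, hardware-only matching (skip_sw)
  $NS /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

On the SPDK side the trace then sets --enable-placement-id 1 on the posix sock implementation and creates the TCP transport with --sock-priority 1, which ties the target's poll groups to the steered hardware queues.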
00:31:07.802 [2024-06-10 13:59:22.247482] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.802 [2024-06-10 13:59:22.247510] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:31:07.803 [2024-06-10 13:59:22.247639] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:31:07.803 [2024-06-10 13:59:22.247643] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:08.739 13:59:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.739 [2024-06-10 13:59:23.082367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.739 Malloc1 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:08.739 13:59:23 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.739 [2024-06-10 13:59:23.138010] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1524255 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:31:08.739 13:59:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:08.739 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.274 13:59:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:31:11.274 13:59:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:11.274 13:59:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:11.274 13:59:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:11.274 13:59:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:31:11.274 "tick_rate": 2500000000, 00:31:11.274 "poll_groups": [ 00:31:11.274 { 00:31:11.274 "name": "nvmf_tgt_poll_group_000", 00:31:11.274 "admin_qpairs": 1, 00:31:11.274 "io_qpairs": 3, 00:31:11.274 "current_admin_qpairs": 1, 00:31:11.274 "current_io_qpairs": 3, 00:31:11.274 "pending_bdev_io": 0, 00:31:11.274 "completed_nvme_io": 23093, 00:31:11.274 "transports": [ 00:31:11.274 { 00:31:11.274 "trtype": "TCP" 00:31:11.274 } 00:31:11.274 ] 00:31:11.274 }, 00:31:11.274 { 00:31:11.274 "name": "nvmf_tgt_poll_group_001", 00:31:11.274 "admin_qpairs": 0, 00:31:11.274 "io_qpairs": 1, 00:31:11.274 "current_admin_qpairs": 0, 00:31:11.274 "current_io_qpairs": 1, 00:31:11.274 "pending_bdev_io": 0, 00:31:11.274 "completed_nvme_io": 26899, 00:31:11.274 "transports": [ 00:31:11.274 { 00:31:11.274 "trtype": "TCP" 00:31:11.274 } 00:31:11.274 ] 00:31:11.274 }, 00:31:11.274 { 00:31:11.274 "name": "nvmf_tgt_poll_group_002", 00:31:11.274 "admin_qpairs": 0, 00:31:11.274 "io_qpairs": 0, 00:31:11.274 "current_admin_qpairs": 0, 00:31:11.274 "current_io_qpairs": 0, 00:31:11.274 "pending_bdev_io": 0, 00:31:11.274 "completed_nvme_io": 0, 
00:31:11.274 "transports": [ 00:31:11.274 { 00:31:11.274 "trtype": "TCP" 00:31:11.274 } 00:31:11.274 ] 00:31:11.274 }, 00:31:11.274 { 00:31:11.274 "name": "nvmf_tgt_poll_group_003", 00:31:11.274 "admin_qpairs": 0, 00:31:11.274 "io_qpairs": 0, 00:31:11.274 "current_admin_qpairs": 0, 00:31:11.274 "current_io_qpairs": 0, 00:31:11.274 "pending_bdev_io": 0, 00:31:11.274 "completed_nvme_io": 0, 00:31:11.274 "transports": [ 00:31:11.274 { 00:31:11.274 "trtype": "TCP" 00:31:11.274 } 00:31:11.274 ] 00:31:11.274 } 00:31:11.274 ] 00:31:11.274 }' 00:31:11.274 13:59:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:31:11.274 13:59:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:31:11.274 13:59:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:31:11.274 13:59:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:31:11.274 13:59:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1524255 00:31:19.393 Initializing NVMe Controllers 00:31:19.393 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:19.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:31:19.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:31:19.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:31:19.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:31:19.393 Initialization complete. Launching workers. 00:31:19.393 ======================================================== 00:31:19.393 Latency(us) 00:31:19.393 Device Information : IOPS MiB/s Average min max 00:31:19.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3751.50 14.65 17064.19 3034.18 62800.46 00:31:19.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4513.00 17.63 14182.56 1808.95 62721.16 00:31:19.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14232.60 55.60 4496.44 1590.61 8127.38 00:31:19.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3956.80 15.46 16180.07 1868.96 64104.06 00:31:19.393 ======================================================== 00:31:19.393 Total : 26453.89 103.34 9678.71 1590.61 64104.06 00:31:19.393 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:19.393 rmmod nvme_tcp 00:31:19.393 rmmod nvme_fabrics 00:31:19.393 rmmod nvme_keyring 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1523964 ']' 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1523964 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 1523964 ']' 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 1523964 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1523964 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1523964' 00:31:19.393 killing process with pid 1523964 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 1523964 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 1523964 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:19.393 13:59:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.754 13:59:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:22.754 13:59:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:22.754 00:31:22.754 real 0m55.173s 00:31:22.754 user 2m47.040s 00:31:22.754 sys 0m15.965s 00:31:22.754 13:59:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:22.754 13:59:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:22.754 ************************************ 00:31:22.754 END TEST nvmf_perf_adq 00:31:22.754 ************************************ 00:31:22.754 13:59:36 nvmf_tcp -- nvmf/nvmf.sh@84 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:22.754 13:59:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:22.754 13:59:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:22.754 13:59:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.754 ************************************ 00:31:22.754 START TEST nvmf_shutdown 00:31:22.754 ************************************ 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:22.754 * Looking for test storage... 
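For reference, the pass/fail decision for the ADQ run above hinges on where I/O queue pairs land: with placement-id steering active, queue pairs should concentrate on a subset of poll groups while the rest stay idle, and the check at perf_adq.sh@99-@101 requires at least two idle groups. A sketch of that check is shown below; the rpc.py invocation is an assumption for illustration, as the test itself goes through its rpc_cmd wrapper.

  # count poll groups with no active I/O queue pairs (jq filter copied from the trace)
  idle=$(./scripts/rpc.py nvmf_get_stats \
           | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
           | wc -l)
  if [[ $idle -lt 2 ]]; then
      echo "ADQ steering not effective: only $idle idle poll groups" >&2
      exit 1
  fi

In the stats dump above this yields 2 (poll groups 002 and 003 carry no I/O), so the logged comparison [[ 2 -lt 2 ]] is false and the run proceeds to the perf summary.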
00:31:22.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.754 13:59:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:22.755 13:59:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:22.755 ************************************ 00:31:22.755 START TEST nvmf_shutdown_tc1 00:31:22.755 ************************************ 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:31:22.755 13:59:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:22.755 13:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.741 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:32.742 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:32.742 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.742 13:59:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:32.742 Found net devices under 0000:af:00.0: cvl_0_0 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:32.742 Found net devices under 0000:af:00.1: cvl_0_1 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:32.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:31:32.742 00:31:32.742 --- 10.0.0.2 ping statistics --- 00:31:32.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.742 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:32.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:31:32.742 00:31:32.742 --- 10.0.0.1 ping statistics --- 00:31:32.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.742 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:31:32.742 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1530654 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1530654 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 1530654 ']' 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:32.743 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:32.743 [2024-06-10 13:59:45.855325] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
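The trace above is the whole nvmf_tcp_init network fixture: the first detected E810 port (cvl_0_0) is moved into a fresh network namespace that will host the target, the port left in the default namespace (cvl_0_1) plays the initiator, both sides get an address in 10.0.0.0/24, TCP port 4420 is opened in iptables, and a one-packet ping in each direction confirms the loopback path before nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk. A condensed sketch of the same sequence, using the interface names and addresses this particular run detected:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                          # target side gets its own namespace
  ip link set cvl_0_0 netns "$NS"             # move the first detected port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator stays in the default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                          # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator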
00:31:32.743 [2024-06-10 13:59:45.855385] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.743 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.743 [2024-06-10 13:59:45.972144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:32.743 [2024-06-10 13:59:46.059270] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.743 [2024-06-10 13:59:46.059313] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.743 [2024-06-10 13:59:46.059326] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.743 [2024-06-10 13:59:46.059338] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.743 [2024-06-10 13:59:46.059348] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.743 [2024-06-10 13:59:46.059453] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:31:32.743 [2024-06-10 13:59:46.059563] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:31:32.743 [2024-06-10 13:59:46.059675] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.743 [2024-06-10 13:59:46.059675] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:32.743 [2024-06-10 13:59:46.814946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:32.743 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:32.743 Malloc1 00:31:32.743 [2024-06-10 13:59:46.931095] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.743 Malloc2 00:31:32.743 Malloc3 00:31:32.743 Malloc4 00:31:32.743 Malloc5 00:31:32.743 Malloc6 00:31:32.743 Malloc7 00:31:33.003 Malloc8 00:31:33.003 Malloc9 00:31:33.003 Malloc10 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1530964 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1530964 /var/tmp/bdevperf.sock 00:31:33.003 13:59:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 1530964 ']' 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:33.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.003 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.003 { 00:31:33.003 "params": { 00:31:33.003 "name": "Nvme$subsystem", 00:31:33.003 "trtype": "$TEST_TRANSPORT", 00:31:33.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.003 "adrfam": "ipv4", 00:31:33.003 "trsvcid": "$NVMF_PORT", 00:31:33.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.004 "hdgst": ${hdgst:-false}, 00:31:33.004 "ddgst": ${ddgst:-false} 00:31:33.004 }, 00:31:33.004 "method": "bdev_nvme_attach_controller" 00:31:33.004 } 00:31:33.004 EOF 00:31:33.004 )") 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.004 { 00:31:33.004 "params": { 00:31:33.004 "name": "Nvme$subsystem", 00:31:33.004 "trtype": "$TEST_TRANSPORT", 00:31:33.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.004 "adrfam": "ipv4", 00:31:33.004 "trsvcid": "$NVMF_PORT", 00:31:33.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.004 "hdgst": ${hdgst:-false}, 00:31:33.004 "ddgst": ${ddgst:-false} 00:31:33.004 }, 00:31:33.004 "method": "bdev_nvme_attach_controller" 00:31:33.004 } 00:31:33.004 EOF 00:31:33.004 )") 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.004 { 00:31:33.004 "params": { 00:31:33.004 "name": "Nvme$subsystem", 00:31:33.004 "trtype": 
"$TEST_TRANSPORT", 00:31:33.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.004 "adrfam": "ipv4", 00:31:33.004 "trsvcid": "$NVMF_PORT", 00:31:33.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.004 "hdgst": ${hdgst:-false}, 00:31:33.004 "ddgst": ${ddgst:-false} 00:31:33.004 }, 00:31:33.004 "method": "bdev_nvme_attach_controller" 00:31:33.004 } 00:31:33.004 EOF 00:31:33.004 )") 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.004 { 00:31:33.004 "params": { 00:31:33.004 "name": "Nvme$subsystem", 00:31:33.004 "trtype": "$TEST_TRANSPORT", 00:31:33.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.004 "adrfam": "ipv4", 00:31:33.004 "trsvcid": "$NVMF_PORT", 00:31:33.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.004 "hdgst": ${hdgst:-false}, 00:31:33.004 "ddgst": ${ddgst:-false} 00:31:33.004 }, 00:31:33.004 "method": "bdev_nvme_attach_controller" 00:31:33.004 } 00:31:33.004 EOF 00:31:33.004 )") 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.004 { 00:31:33.004 "params": { 00:31:33.004 "name": "Nvme$subsystem", 00:31:33.004 "trtype": "$TEST_TRANSPORT", 00:31:33.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.004 "adrfam": "ipv4", 00:31:33.004 "trsvcid": "$NVMF_PORT", 00:31:33.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.004 "hdgst": ${hdgst:-false}, 00:31:33.004 "ddgst": ${ddgst:-false} 00:31:33.004 }, 00:31:33.004 "method": "bdev_nvme_attach_controller" 00:31:33.004 } 00:31:33.004 EOF 00:31:33.004 )") 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.004 { 00:31:33.004 "params": { 00:31:33.004 "name": "Nvme$subsystem", 00:31:33.004 "trtype": "$TEST_TRANSPORT", 00:31:33.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.004 "adrfam": "ipv4", 00:31:33.004 "trsvcid": "$NVMF_PORT", 00:31:33.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.004 "hdgst": ${hdgst:-false}, 00:31:33.004 "ddgst": ${ddgst:-false} 00:31:33.004 }, 00:31:33.004 "method": "bdev_nvme_attach_controller" 00:31:33.004 } 00:31:33.004 EOF 00:31:33.004 )") 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.004 [2024-06-10 13:59:47.427186] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:31:33.004 [2024-06-10 13:59:47.427248] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.004 { 00:31:33.004 "params": { 00:31:33.004 "name": "Nvme$subsystem", 00:31:33.004 "trtype": "$TEST_TRANSPORT", 00:31:33.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.004 "adrfam": "ipv4", 00:31:33.004 "trsvcid": "$NVMF_PORT", 00:31:33.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.004 "hdgst": ${hdgst:-false}, 00:31:33.004 "ddgst": ${ddgst:-false} 00:31:33.004 }, 00:31:33.004 "method": "bdev_nvme_attach_controller" 00:31:33.004 } 00:31:33.004 EOF 00:31:33.004 )") 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.004 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.004 { 00:31:33.004 "params": { 00:31:33.004 "name": "Nvme$subsystem", 00:31:33.004 "trtype": "$TEST_TRANSPORT", 00:31:33.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.004 "adrfam": "ipv4", 00:31:33.004 "trsvcid": "$NVMF_PORT", 00:31:33.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.004 "hdgst": ${hdgst:-false}, 00:31:33.005 "ddgst": ${ddgst:-false} 00:31:33.005 }, 00:31:33.005 "method": "bdev_nvme_attach_controller" 00:31:33.005 } 00:31:33.005 EOF 00:31:33.005 )") 00:31:33.005 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.005 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.005 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.005 { 00:31:33.005 "params": { 00:31:33.005 "name": "Nvme$subsystem", 00:31:33.005 "trtype": "$TEST_TRANSPORT", 00:31:33.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.005 "adrfam": "ipv4", 00:31:33.005 "trsvcid": "$NVMF_PORT", 00:31:33.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.005 "hdgst": ${hdgst:-false}, 00:31:33.005 "ddgst": ${ddgst:-false} 00:31:33.005 }, 00:31:33.005 "method": "bdev_nvme_attach_controller" 00:31:33.005 } 00:31:33.005 EOF 00:31:33.005 )") 00:31:33.005 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.005 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.005 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.005 { 00:31:33.005 "params": { 00:31:33.005 "name": "Nvme$subsystem", 00:31:33.005 "trtype": "$TEST_TRANSPORT", 00:31:33.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.005 "adrfam": "ipv4", 00:31:33.005 "trsvcid": "$NVMF_PORT", 00:31:33.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.005 "hdgst": ${hdgst:-false}, 00:31:33.005 "ddgst": 
${ddgst:-false} 00:31:33.005 }, 00:31:33.005 "method": "bdev_nvme_attach_controller" 00:31:33.005 } 00:31:33.005 EOF 00:31:33.005 )") 00:31:33.005 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.005 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:31:33.005 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:31:33.005 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:33.005 "params": { 00:31:33.005 "name": "Nvme1", 00:31:33.005 "trtype": "tcp", 00:31:33.005 "traddr": "10.0.0.2", 00:31:33.005 "adrfam": "ipv4", 00:31:33.005 "trsvcid": "4420", 00:31:33.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:33.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:33.005 "hdgst": false, 00:31:33.005 "ddgst": false 00:31:33.005 }, 00:31:33.005 "method": "bdev_nvme_attach_controller" 00:31:33.005 },{ 00:31:33.005 "params": { 00:31:33.005 "name": "Nvme2", 00:31:33.005 "trtype": "tcp", 00:31:33.005 "traddr": "10.0.0.2", 00:31:33.005 "adrfam": "ipv4", 00:31:33.005 "trsvcid": "4420", 00:31:33.005 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:33.005 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:33.005 "hdgst": false, 00:31:33.005 "ddgst": false 00:31:33.005 }, 00:31:33.005 "method": "bdev_nvme_attach_controller" 00:31:33.005 },{ 00:31:33.005 "params": { 00:31:33.005 "name": "Nvme3", 00:31:33.005 "trtype": "tcp", 00:31:33.005 "traddr": "10.0.0.2", 00:31:33.005 "adrfam": "ipv4", 00:31:33.005 "trsvcid": "4420", 00:31:33.005 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:33.005 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:33.005 "hdgst": false, 00:31:33.005 "ddgst": false 00:31:33.005 }, 00:31:33.005 "method": "bdev_nvme_attach_controller" 00:31:33.005 },{ 00:31:33.005 "params": { 00:31:33.005 "name": "Nvme4", 00:31:33.005 "trtype": "tcp", 00:31:33.005 "traddr": "10.0.0.2", 00:31:33.005 "adrfam": "ipv4", 00:31:33.005 "trsvcid": "4420", 00:31:33.005 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:33.005 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:33.005 "hdgst": false, 00:31:33.005 "ddgst": false 00:31:33.005 }, 00:31:33.005 "method": "bdev_nvme_attach_controller" 00:31:33.005 },{ 00:31:33.005 "params": { 00:31:33.005 "name": "Nvme5", 00:31:33.005 "trtype": "tcp", 00:31:33.005 "traddr": "10.0.0.2", 00:31:33.005 "adrfam": "ipv4", 00:31:33.005 "trsvcid": "4420", 00:31:33.005 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:33.005 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:33.005 "hdgst": false, 00:31:33.005 "ddgst": false 00:31:33.005 }, 00:31:33.005 "method": "bdev_nvme_attach_controller" 00:31:33.005 },{ 00:31:33.005 "params": { 00:31:33.005 "name": "Nvme6", 00:31:33.005 "trtype": "tcp", 00:31:33.005 "traddr": "10.0.0.2", 00:31:33.005 "adrfam": "ipv4", 00:31:33.005 "trsvcid": "4420", 00:31:33.005 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:33.005 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:33.005 "hdgst": false, 00:31:33.005 "ddgst": false 00:31:33.005 }, 00:31:33.005 "method": "bdev_nvme_attach_controller" 00:31:33.005 },{ 00:31:33.005 "params": { 00:31:33.005 "name": "Nvme7", 00:31:33.005 "trtype": "tcp", 00:31:33.005 "traddr": "10.0.0.2", 00:31:33.005 "adrfam": "ipv4", 00:31:33.005 "trsvcid": "4420", 00:31:33.005 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:33.005 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:33.005 "hdgst": false, 00:31:33.005 "ddgst": false 00:31:33.005 }, 00:31:33.005 "method": "bdev_nvme_attach_controller" 00:31:33.005 },{ 
00:31:33.005 "params": { 00:31:33.005 "name": "Nvme8", 00:31:33.005 "trtype": "tcp", 00:31:33.005 "traddr": "10.0.0.2", 00:31:33.005 "adrfam": "ipv4", 00:31:33.005 "trsvcid": "4420", 00:31:33.005 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:33.005 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:33.005 "hdgst": false, 00:31:33.005 "ddgst": false 00:31:33.005 }, 00:31:33.005 "method": "bdev_nvme_attach_controller" 00:31:33.005 },{ 00:31:33.005 "params": { 00:31:33.006 "name": "Nvme9", 00:31:33.006 "trtype": "tcp", 00:31:33.006 "traddr": "10.0.0.2", 00:31:33.006 "adrfam": "ipv4", 00:31:33.006 "trsvcid": "4420", 00:31:33.006 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:33.006 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:33.006 "hdgst": false, 00:31:33.006 "ddgst": false 00:31:33.006 }, 00:31:33.006 "method": "bdev_nvme_attach_controller" 00:31:33.006 },{ 00:31:33.006 "params": { 00:31:33.006 "name": "Nvme10", 00:31:33.006 "trtype": "tcp", 00:31:33.006 "traddr": "10.0.0.2", 00:31:33.006 "adrfam": "ipv4", 00:31:33.006 "trsvcid": "4420", 00:31:33.006 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:33.006 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:33.006 "hdgst": false, 00:31:33.006 "ddgst": false 00:31:33.006 }, 00:31:33.006 "method": "bdev_nvme_attach_controller" 00:31:33.006 }' 00:31:33.265 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.265 [2024-06-10 13:59:47.551923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.265 [2024-06-10 13:59:47.634866] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.643 13:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:34.643 13:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:31:34.643 13:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:34.643 13:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:34.643 13:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:34.643 13:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:34.643 13:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1530964 00:31:34.643 13:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:31:34.643 13:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:31:35.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1530964 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1530654 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:31:35.580 13:59:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.580 { 00:31:35.580 "params": { 00:31:35.580 "name": "Nvme$subsystem", 00:31:35.580 "trtype": "$TEST_TRANSPORT", 00:31:35.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.580 "adrfam": "ipv4", 00:31:35.580 "trsvcid": "$NVMF_PORT", 00:31:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.580 "hdgst": ${hdgst:-false}, 00:31:35.580 "ddgst": ${ddgst:-false} 00:31:35.580 }, 00:31:35.580 "method": "bdev_nvme_attach_controller" 00:31:35.580 } 00:31:35.580 EOF 00:31:35.580 )") 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.580 { 00:31:35.580 "params": { 00:31:35.580 "name": "Nvme$subsystem", 00:31:35.580 "trtype": "$TEST_TRANSPORT", 00:31:35.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.580 "adrfam": "ipv4", 00:31:35.580 "trsvcid": "$NVMF_PORT", 00:31:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.580 "hdgst": ${hdgst:-false}, 00:31:35.580 "ddgst": ${ddgst:-false} 00:31:35.580 }, 00:31:35.580 "method": "bdev_nvme_attach_controller" 00:31:35.580 } 00:31:35.580 EOF 00:31:35.580 )") 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.580 { 00:31:35.580 "params": { 00:31:35.580 "name": "Nvme$subsystem", 00:31:35.580 "trtype": "$TEST_TRANSPORT", 00:31:35.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.580 "adrfam": "ipv4", 00:31:35.580 "trsvcid": "$NVMF_PORT", 00:31:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.580 "hdgst": ${hdgst:-false}, 00:31:35.580 "ddgst": ${ddgst:-false} 00:31:35.580 }, 00:31:35.580 "method": "bdev_nvme_attach_controller" 00:31:35.580 } 00:31:35.580 EOF 00:31:35.580 )") 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.580 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.580 { 00:31:35.580 "params": { 00:31:35.580 "name": "Nvme$subsystem", 00:31:35.580 "trtype": "$TEST_TRANSPORT", 00:31:35.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.580 "adrfam": "ipv4", 00:31:35.580 "trsvcid": "$NVMF_PORT", 00:31:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.580 "hdgst": ${hdgst:-false}, 00:31:35.580 "ddgst": ${ddgst:-false} 00:31:35.580 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 } 00:31:35.581 EOF 00:31:35.581 )") 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:35.581 13:59:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.581 { 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme$subsystem", 00:31:35.581 "trtype": "$TEST_TRANSPORT", 00:31:35.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "$NVMF_PORT", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.581 "hdgst": ${hdgst:-false}, 00:31:35.581 "ddgst": ${ddgst:-false} 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 } 00:31:35.581 EOF 00:31:35.581 )") 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.581 { 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme$subsystem", 00:31:35.581 "trtype": "$TEST_TRANSPORT", 00:31:35.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "$NVMF_PORT", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.581 "hdgst": ${hdgst:-false}, 00:31:35.581 "ddgst": ${ddgst:-false} 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 } 00:31:35.581 EOF 00:31:35.581 )") 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.581 [2024-06-10 13:59:49.898005] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
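The repeated config+=( heredoc ) blocks above come from gen_nvmf_target_json: it emits one bdev_nvme_attach_controller stanza per subsystem number it is given and joins them with commas for the --json <(...) process substitution that bdev_svc and bdevperf read. A rough standalone approximation of just that expansion step (the real helper in nvmf/common.sh also runs the result through jq and fills the values from the live test environment) could be:

  gen_json() {
    local s parts=()
    for s in "$@"; do
      parts+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' "$s" "$s" "$s")")
    done
    local IFS=,
    printf '%s\n' "${parts[*]}"   # comma-joined, matching the expanded config printed later in the trace
  }
  gen_json {1..10}                # one attach entry per subsystem nqn.2016-06.io.spdk:cnode1..10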
00:31:35.581 [2024-06-10 13:59:49.898073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1531411 ] 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.581 { 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme$subsystem", 00:31:35.581 "trtype": "$TEST_TRANSPORT", 00:31:35.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "$NVMF_PORT", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.581 "hdgst": ${hdgst:-false}, 00:31:35.581 "ddgst": ${ddgst:-false} 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 } 00:31:35.581 EOF 00:31:35.581 )") 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.581 { 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme$subsystem", 00:31:35.581 "trtype": "$TEST_TRANSPORT", 00:31:35.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "$NVMF_PORT", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.581 "hdgst": ${hdgst:-false}, 00:31:35.581 "ddgst": ${ddgst:-false} 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 } 00:31:35.581 EOF 00:31:35.581 )") 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.581 { 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme$subsystem", 00:31:35.581 "trtype": "$TEST_TRANSPORT", 00:31:35.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "$NVMF_PORT", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.581 "hdgst": ${hdgst:-false}, 00:31:35.581 "ddgst": ${ddgst:-false} 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 } 00:31:35.581 EOF 00:31:35.581 )") 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.581 { 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme$subsystem", 00:31:35.581 "trtype": "$TEST_TRANSPORT", 00:31:35.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "$NVMF_PORT", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.581 "hdgst": ${hdgst:-false}, 00:31:35.581 "ddgst": ${ddgst:-false} 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 } 
00:31:35.581 EOF 00:31:35.581 )") 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:31:35.581 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme1", 00:31:35.581 "trtype": "tcp", 00:31:35.581 "traddr": "10.0.0.2", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "4420", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:35.581 "hdgst": false, 00:31:35.581 "ddgst": false 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 },{ 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme2", 00:31:35.581 "trtype": "tcp", 00:31:35.581 "traddr": "10.0.0.2", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "4420", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:35.581 "hdgst": false, 00:31:35.581 "ddgst": false 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 },{ 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme3", 00:31:35.581 "trtype": "tcp", 00:31:35.581 "traddr": "10.0.0.2", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "4420", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:35.581 "hdgst": false, 00:31:35.581 "ddgst": false 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 },{ 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme4", 00:31:35.581 "trtype": "tcp", 00:31:35.581 "traddr": "10.0.0.2", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "4420", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:35.581 "hdgst": false, 00:31:35.581 "ddgst": false 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 },{ 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme5", 00:31:35.581 "trtype": "tcp", 00:31:35.581 "traddr": "10.0.0.2", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "4420", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:35.581 "hdgst": false, 00:31:35.581 "ddgst": false 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 },{ 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme6", 00:31:35.581 "trtype": "tcp", 00:31:35.581 "traddr": "10.0.0.2", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "4420", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:35.581 "hdgst": false, 00:31:35.581 "ddgst": false 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 },{ 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme7", 00:31:35.581 "trtype": "tcp", 00:31:35.581 "traddr": "10.0.0.2", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "4420", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:35.581 "hdgst": false, 00:31:35.581 "ddgst": false 00:31:35.581 }, 00:31:35.581 "method": "bdev_nvme_attach_controller" 00:31:35.581 },{ 00:31:35.581 "params": { 00:31:35.581 "name": "Nvme8", 00:31:35.581 "trtype": "tcp", 00:31:35.581 
"traddr": "10.0.0.2", 00:31:35.581 "adrfam": "ipv4", 00:31:35.581 "trsvcid": "4420", 00:31:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:35.581 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:35.582 "hdgst": false, 00:31:35.582 "ddgst": false 00:31:35.582 }, 00:31:35.582 "method": "bdev_nvme_attach_controller" 00:31:35.582 },{ 00:31:35.582 "params": { 00:31:35.582 "name": "Nvme9", 00:31:35.582 "trtype": "tcp", 00:31:35.582 "traddr": "10.0.0.2", 00:31:35.582 "adrfam": "ipv4", 00:31:35.582 "trsvcid": "4420", 00:31:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:35.582 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:35.582 "hdgst": false, 00:31:35.582 "ddgst": false 00:31:35.582 }, 00:31:35.582 "method": "bdev_nvme_attach_controller" 00:31:35.582 },{ 00:31:35.582 "params": { 00:31:35.582 "name": "Nvme10", 00:31:35.582 "trtype": "tcp", 00:31:35.582 "traddr": "10.0.0.2", 00:31:35.582 "adrfam": "ipv4", 00:31:35.582 "trsvcid": "4420", 00:31:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:35.582 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:35.582 "hdgst": false, 00:31:35.582 "ddgst": false 00:31:35.582 }, 00:31:35.582 "method": "bdev_nvme_attach_controller" 00:31:35.582 }' 00:31:35.582 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.582 [2024-06-10 13:59:50.021387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.840 [2024-06-10 13:59:50.113234] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.218 Running I/O for 1 seconds... 00:31:38.595 00:31:38.595 Latency(us) 00:31:38.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.595 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.595 Verification LBA range: start 0x0 length 0x400 00:31:38.595 Nvme1n1 : 1.11 173.14 10.82 0.00 0.00 365613.06 24746.39 286890.39 00:31:38.595 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.595 Verification LBA range: start 0x0 length 0x400 00:31:38.595 Nvme2n1 : 1.13 226.94 14.18 0.00 0.00 273699.23 21600.67 255013.68 00:31:38.596 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.596 Verification LBA range: start 0x0 length 0x400 00:31:38.596 Nvme3n1 : 1.12 227.77 14.24 0.00 0.00 267209.93 26738.69 270113.18 00:31:38.596 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.596 Verification LBA range: start 0x0 length 0x400 00:31:38.596 Nvme4n1 : 1.13 226.43 14.15 0.00 0.00 263997.44 23278.39 275146.34 00:31:38.596 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.596 Verification LBA range: start 0x0 length 0x400 00:31:38.596 Nvme5n1 : 1.14 225.43 14.09 0.00 0.00 259856.79 21705.52 249980.52 00:31:38.596 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.596 Verification LBA range: start 0x0 length 0x400 00:31:38.596 Nvme6n1 : 1.12 171.59 10.72 0.00 0.00 334124.10 25690.11 317089.38 00:31:38.596 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.596 Verification LBA range: start 0x0 length 0x400 00:31:38.596 Nvme7n1 : 1.14 224.28 14.02 0.00 0.00 251039.54 24222.11 276824.06 00:31:38.596 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.596 Verification LBA range: start 0x0 length 0x400 00:31:38.596 Nvme8n1 : 1.16 229.40 14.34 0.00 0.00 239462.02 3696.23 280179.51 00:31:38.596 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.596 
Verification LBA range: start 0x0 length 0x400 00:31:38.596 Nvme9n1 : 1.22 261.37 16.34 0.00 0.00 208984.06 8598.32 258369.13 00:31:38.596 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.596 Verification LBA range: start 0x0 length 0x400 00:31:38.596 Nvme10n1 : 1.23 260.25 16.27 0.00 0.00 205906.17 10800.33 280179.51 00:31:38.596 =================================================================================================================== 00:31:38.596 Total : 2226.60 139.16 0.00 0.00 259796.57 3696.23 317089.38 00:31:38.596 13:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:31:38.596 13:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:38.596 13:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:38.596 13:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:38.596 13:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:38.596 13:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:38.596 13:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:31:38.596 13:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:38.596 13:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:31:38.596 13:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:38.596 13:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:38.596 rmmod nvme_tcp 00:31:38.596 rmmod nvme_fabrics 00:31:38.596 rmmod nvme_keyring 00:31:38.596 13:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1530654 ']' 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1530654 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 1530654 ']' 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 1530654 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1530654 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1530654' 00:31:38.596 killing process with pid 1530654 00:31:38.596 
13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 1530654 00:31:38.596 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 1530654 00:31:39.165 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:39.165 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:39.165 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:39.165 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:39.165 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:39.165 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.165 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:39.165 13:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:41.703 00:31:41.703 real 0m18.534s 00:31:41.703 user 0m35.725s 00:31:41.703 sys 0m8.458s 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:41.703 ************************************ 00:31:41.703 END TEST nvmf_shutdown_tc1 00:31:41.703 ************************************ 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:41.703 ************************************ 00:31:41.703 START TEST nvmf_shutdown_tc2 00:31:41.703 ************************************ 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
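nvmftestfini then unwinds the fixture in roughly the reverse order of the setup: sync, unload the kernel NVMe/TCP initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), kill and reap the in-namespace nvmf_tgt, and finally tear the namespace and addresses back down. A condensed sketch, where the ip netns delete line is an assumption about what _remove_spdk_ns does since its body runs with xtrace suppressed:

  sync
  modprobe -v -r nvme-tcp            # also drops nvme_fabrics and nvme_keyring, as logged above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid" # stop the target app started with -m 0x1E
  ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns (not expanded in the trace)
  ip -4 addr flush cvl_0_1           # the next traced command after this point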
00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:41.703 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:41.704 13:59:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:41.704 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:41.704 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.704 13:59:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:41.704 Found net devices under 0000:af:00.0: cvl_0_0 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:41.704 Found net devices under 0000:af:00.1: cvl_0_1 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:41.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:41.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:31:41.704 00:31:41.704 --- 10.0.0.2 ping statistics --- 00:31:41.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.704 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:41.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:41.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:31:41.704 00:31:41.704 --- 10.0.0.1 ping statistics --- 00:31:41.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.704 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:41.704 13:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:31:41.704 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:41.704 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:41.704 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.704 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:41.704 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 
-- # nvmfpid=1532459 00:31:41.704 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1532459 00:31:41.704 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1532459 ']' 00:31:41.704 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.704 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:41.704 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.704 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:41.704 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.704 [2024-06-10 13:59:56.067951] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:31:41.704 [2024-06-10 13:59:56.068013] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:41.704 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.964 [2024-06-10 13:59:56.184640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:41.964 [2024-06-10 13:59:56.271955] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:41.964 [2024-06-10 13:59:56.272002] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:41.964 [2024-06-10 13:59:56.272017] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:41.964 [2024-06-10 13:59:56.272031] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:41.964 [2024-06-10 13:59:56.272041] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:41.964 [2024-06-10 13:59:56.272145] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:31:41.964 [2024-06-10 13:59:56.272260] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:31:41.964 [2024-06-10 13:59:56.272296] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.964 [2024-06-10 13:59:56.272296] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:31:42.531 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:42.531 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:31:42.531 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:42.531 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:42.531 13:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:42.790 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:42.790 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:42.790 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:42.790 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:42.790 [2024-06-10 13:59:57.015879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.790 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:42.790 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:31:42.790 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:42.791 13:59:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:42.791 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:42.791 Malloc1 00:31:42.791 [2024-06-10 13:59:57.128010] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.791 Malloc2 00:31:42.791 Malloc3 00:31:42.791 Malloc4 00:31:43.050 Malloc5 00:31:43.050 Malloc6 00:31:43.050 Malloc7 00:31:43.050 Malloc8 00:31:43.050 Malloc9 00:31:43.050 Malloc10 00:31:43.308 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1532757 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1532757 /var/tmp/bdevperf.sock 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1532757 ']' 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:43.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:43.309 { 00:31:43.309 "params": { 00:31:43.309 "name": "Nvme$subsystem", 00:31:43.309 "trtype": "$TEST_TRANSPORT", 00:31:43.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.309 "adrfam": "ipv4", 00:31:43.309 "trsvcid": "$NVMF_PORT", 00:31:43.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.309 "hdgst": ${hdgst:-false}, 00:31:43.309 "ddgst": ${ddgst:-false} 00:31:43.309 }, 00:31:43.309 "method": "bdev_nvme_attach_controller" 00:31:43.309 } 00:31:43.309 EOF 00:31:43.309 )") 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:43.309 { 00:31:43.309 "params": { 00:31:43.309 "name": "Nvme$subsystem", 00:31:43.309 "trtype": "$TEST_TRANSPORT", 00:31:43.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.309 "adrfam": "ipv4", 00:31:43.309 "trsvcid": "$NVMF_PORT", 00:31:43.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.309 "hdgst": ${hdgst:-false}, 00:31:43.309 "ddgst": ${ddgst:-false} 00:31:43.309 }, 00:31:43.309 "method": "bdev_nvme_attach_controller" 00:31:43.309 } 00:31:43.309 EOF 00:31:43.309 )") 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:43.309 { 00:31:43.309 "params": { 00:31:43.309 "name": "Nvme$subsystem", 00:31:43.309 "trtype": "$TEST_TRANSPORT", 00:31:43.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.309 "adrfam": "ipv4", 00:31:43.309 "trsvcid": "$NVMF_PORT", 00:31:43.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.309 "hdgst": ${hdgst:-false}, 00:31:43.309 "ddgst": ${ddgst:-false} 00:31:43.309 }, 00:31:43.309 "method": "bdev_nvme_attach_controller" 00:31:43.309 } 00:31:43.309 EOF 00:31:43.309 )") 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:43.309 { 00:31:43.309 "params": { 00:31:43.309 "name": "Nvme$subsystem", 00:31:43.309 "trtype": 
"$TEST_TRANSPORT", 00:31:43.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.309 "adrfam": "ipv4", 00:31:43.309 "trsvcid": "$NVMF_PORT", 00:31:43.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.309 "hdgst": ${hdgst:-false}, 00:31:43.309 "ddgst": ${ddgst:-false} 00:31:43.309 }, 00:31:43.309 "method": "bdev_nvme_attach_controller" 00:31:43.309 } 00:31:43.309 EOF 00:31:43.309 )") 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:43.309 { 00:31:43.309 "params": { 00:31:43.309 "name": "Nvme$subsystem", 00:31:43.309 "trtype": "$TEST_TRANSPORT", 00:31:43.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.309 "adrfam": "ipv4", 00:31:43.309 "trsvcid": "$NVMF_PORT", 00:31:43.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.309 "hdgst": ${hdgst:-false}, 00:31:43.309 "ddgst": ${ddgst:-false} 00:31:43.309 }, 00:31:43.309 "method": "bdev_nvme_attach_controller" 00:31:43.309 } 00:31:43.309 EOF 00:31:43.309 )") 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:43.309 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:43.309 { 00:31:43.309 "params": { 00:31:43.309 "name": "Nvme$subsystem", 00:31:43.309 "trtype": "$TEST_TRANSPORT", 00:31:43.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.309 "adrfam": "ipv4", 00:31:43.309 "trsvcid": "$NVMF_PORT", 00:31:43.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.309 "hdgst": ${hdgst:-false}, 00:31:43.309 "ddgst": ${ddgst:-false} 00:31:43.309 }, 00:31:43.310 "method": "bdev_nvme_attach_controller" 00:31:43.310 } 00:31:43.310 EOF 00:31:43.310 )") 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:43.310 [2024-06-10 13:59:57.626670] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:31:43.310 [2024-06-10 13:59:57.626736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1532757 ] 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:43.310 { 00:31:43.310 "params": { 00:31:43.310 "name": "Nvme$subsystem", 00:31:43.310 "trtype": "$TEST_TRANSPORT", 00:31:43.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.310 "adrfam": "ipv4", 00:31:43.310 "trsvcid": "$NVMF_PORT", 00:31:43.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.310 "hdgst": ${hdgst:-false}, 00:31:43.310 "ddgst": ${ddgst:-false} 00:31:43.310 }, 00:31:43.310 "method": "bdev_nvme_attach_controller" 00:31:43.310 } 00:31:43.310 EOF 00:31:43.310 )") 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:43.310 { 00:31:43.310 "params": { 00:31:43.310 "name": "Nvme$subsystem", 00:31:43.310 "trtype": "$TEST_TRANSPORT", 00:31:43.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.310 "adrfam": "ipv4", 00:31:43.310 "trsvcid": "$NVMF_PORT", 00:31:43.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.310 "hdgst": ${hdgst:-false}, 00:31:43.310 "ddgst": ${ddgst:-false} 00:31:43.310 }, 00:31:43.310 "method": "bdev_nvme_attach_controller" 00:31:43.310 } 00:31:43.310 EOF 00:31:43.310 )") 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:43.310 { 00:31:43.310 "params": { 00:31:43.310 "name": "Nvme$subsystem", 00:31:43.310 "trtype": "$TEST_TRANSPORT", 00:31:43.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.310 "adrfam": "ipv4", 00:31:43.310 "trsvcid": "$NVMF_PORT", 00:31:43.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.310 "hdgst": ${hdgst:-false}, 00:31:43.310 "ddgst": ${ddgst:-false} 00:31:43.310 }, 00:31:43.310 "method": "bdev_nvme_attach_controller" 00:31:43.310 } 00:31:43.310 EOF 00:31:43.310 )") 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:43.310 { 00:31:43.310 "params": { 00:31:43.310 "name": "Nvme$subsystem", 00:31:43.310 "trtype": "$TEST_TRANSPORT", 00:31:43.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.310 "adrfam": "ipv4", 00:31:43.310 "trsvcid": "$NVMF_PORT", 00:31:43.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.310 "hdgst": ${hdgst:-false}, 
00:31:43.310 "ddgst": ${ddgst:-false} 00:31:43.310 }, 00:31:43.310 "method": "bdev_nvme_attach_controller" 00:31:43.310 } 00:31:43.310 EOF 00:31:43.310 )") 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:31:43.310 13:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:43.310 "params": { 00:31:43.310 "name": "Nvme1", 00:31:43.310 "trtype": "tcp", 00:31:43.310 "traddr": "10.0.0.2", 00:31:43.310 "adrfam": "ipv4", 00:31:43.310 "trsvcid": "4420", 00:31:43.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:43.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:43.310 "hdgst": false, 00:31:43.310 "ddgst": false 00:31:43.310 }, 00:31:43.310 "method": "bdev_nvme_attach_controller" 00:31:43.310 },{ 00:31:43.310 "params": { 00:31:43.310 "name": "Nvme2", 00:31:43.310 "trtype": "tcp", 00:31:43.310 "traddr": "10.0.0.2", 00:31:43.310 "adrfam": "ipv4", 00:31:43.310 "trsvcid": "4420", 00:31:43.310 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:43.310 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:43.310 "hdgst": false, 00:31:43.310 "ddgst": false 00:31:43.310 }, 00:31:43.310 "method": "bdev_nvme_attach_controller" 00:31:43.310 },{ 00:31:43.310 "params": { 00:31:43.310 "name": "Nvme3", 00:31:43.310 "trtype": "tcp", 00:31:43.310 "traddr": "10.0.0.2", 00:31:43.310 "adrfam": "ipv4", 00:31:43.310 "trsvcid": "4420", 00:31:43.310 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:43.310 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:43.310 "hdgst": false, 00:31:43.310 "ddgst": false 00:31:43.310 }, 00:31:43.310 "method": "bdev_nvme_attach_controller" 00:31:43.310 },{ 00:31:43.310 "params": { 00:31:43.310 "name": "Nvme4", 00:31:43.310 "trtype": "tcp", 00:31:43.310 "traddr": "10.0.0.2", 00:31:43.310 "adrfam": "ipv4", 00:31:43.310 "trsvcid": "4420", 00:31:43.310 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:43.310 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:43.310 "hdgst": false, 00:31:43.310 "ddgst": false 00:31:43.310 }, 00:31:43.310 "method": "bdev_nvme_attach_controller" 00:31:43.310 },{ 00:31:43.310 "params": { 00:31:43.310 "name": "Nvme5", 00:31:43.310 "trtype": "tcp", 00:31:43.311 "traddr": "10.0.0.2", 00:31:43.311 "adrfam": "ipv4", 00:31:43.311 "trsvcid": "4420", 00:31:43.311 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:43.311 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:43.311 "hdgst": false, 00:31:43.311 "ddgst": false 00:31:43.311 }, 00:31:43.311 "method": "bdev_nvme_attach_controller" 00:31:43.311 },{ 00:31:43.311 "params": { 00:31:43.311 "name": "Nvme6", 00:31:43.311 "trtype": "tcp", 00:31:43.311 "traddr": "10.0.0.2", 00:31:43.311 "adrfam": "ipv4", 00:31:43.311 "trsvcid": "4420", 00:31:43.311 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:43.311 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:43.311 "hdgst": false, 00:31:43.311 "ddgst": false 00:31:43.311 }, 00:31:43.311 "method": "bdev_nvme_attach_controller" 00:31:43.311 },{ 00:31:43.311 "params": { 00:31:43.311 "name": "Nvme7", 00:31:43.311 "trtype": "tcp", 00:31:43.311 "traddr": "10.0.0.2", 00:31:43.311 "adrfam": "ipv4", 00:31:43.311 "trsvcid": "4420", 00:31:43.311 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:43.311 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:43.311 "hdgst": false, 00:31:43.311 "ddgst": false 00:31:43.311 }, 00:31:43.311 "method": "bdev_nvme_attach_controller" 
00:31:43.311 },{ 00:31:43.311 "params": { 00:31:43.311 "name": "Nvme8", 00:31:43.311 "trtype": "tcp", 00:31:43.311 "traddr": "10.0.0.2", 00:31:43.311 "adrfam": "ipv4", 00:31:43.311 "trsvcid": "4420", 00:31:43.311 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:43.311 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:43.311 "hdgst": false, 00:31:43.311 "ddgst": false 00:31:43.311 }, 00:31:43.311 "method": "bdev_nvme_attach_controller" 00:31:43.311 },{ 00:31:43.311 "params": { 00:31:43.311 "name": "Nvme9", 00:31:43.311 "trtype": "tcp", 00:31:43.311 "traddr": "10.0.0.2", 00:31:43.311 "adrfam": "ipv4", 00:31:43.311 "trsvcid": "4420", 00:31:43.311 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:43.311 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:43.311 "hdgst": false, 00:31:43.311 "ddgst": false 00:31:43.311 }, 00:31:43.311 "method": "bdev_nvme_attach_controller" 00:31:43.311 },{ 00:31:43.311 "params": { 00:31:43.311 "name": "Nvme10", 00:31:43.311 "trtype": "tcp", 00:31:43.311 "traddr": "10.0.0.2", 00:31:43.311 "adrfam": "ipv4", 00:31:43.311 "trsvcid": "4420", 00:31:43.311 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:43.311 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:43.311 "hdgst": false, 00:31:43.311 "ddgst": false 00:31:43.311 }, 00:31:43.311 "method": "bdev_nvme_attach_controller" 00:31:43.311 }' 00:31:43.311 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.311 [2024-06-10 13:59:57.749929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.569 [2024-06-10 13:59:57.833395] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.945 Running I/O for 10 seconds... 00:31:44.945 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:44.945 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:31:44.945 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:44.945 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:44.945 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:45.204 13:59:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:31:45.204 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:45.463 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:45.463 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:45.463 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:45.463 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:45.463 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.463 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:45.463 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.463 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:31:45.463 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:31:45.463 13:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:45.722 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:45.722 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:45.722 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:45.722 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.722 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:45.722 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:45.722 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1532757 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 1532757 ']' 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 1532757 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1532757 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1532757' 00:31:45.981 killing process with pid 1532757 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 1532757 00:31:45.981 14:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 1532757 00:31:45.981 Received shutdown signal, test time was about 1.045654 seconds 00:31:45.981 00:31:45.981 Latency(us) 00:31:45.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.981 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:45.981 Verification LBA range: start 0x0 length 0x400 00:31:45.981 Nvme1n1 : 1.01 254.45 15.90 0.00 0.00 248191.18 23383.24 275146.34 00:31:45.981 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:45.981 Verification LBA range: start 0x0 length 0x400 00:31:45.981 Nvme2n1 : 1.04 245.03 15.31 0.00 0.00 243494.91 21915.24 268435.46 00:31:45.981 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:45.981 Verification LBA range: start 0x0 length 0x400 00:31:45.981 Nvme3n1 : 1.00 255.68 15.98 0.00 0.00 236732.83 21076.38 270113.18 00:31:45.981 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:45.981 Verification LBA range: start 0x0 length 0x400 00:31:45.981 Nvme4n1 : 0.96 199.18 12.45 0.00 0.00 296218.90 27472.69 260046.85 00:31:45.981 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:45.981 Verification LBA range: start 0x0 length 0x400 00:31:45.981 Nvme5n1 : 0.97 198.48 12.40 0.00 0.00 290273.69 29150.41 248302.80 00:31:45.981 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:45.981 Verification LBA range: start 0x0 length 0x400 00:31:45.981 Nvme6n1 : 0.99 193.93 12.12 0.00 0.00 291146.96 27472.69 280179.51 00:31:45.981 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:45.981 Verification LBA range: start 0x0 length 0x400 00:31:45.981 Nvme7n1 : 0.97 197.25 12.33 0.00 0.00 278824.82 22754.10 246625.08 00:31:45.981 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:45.981 Verification LBA range: start 0x0 length 0x400 00:31:45.981 Nvme8n1 : 0.98 196.80 12.30 0.00 0.00 272673.45 24956.11 296956.72 00:31:45.981 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:45.981 Verification LBA range: start 0x0 length 0x400 00:31:45.981 Nvme9n1 : 0.99 194.56 12.16 0.00 0.00 269345.86 19608.37 281857.23 00:31:45.981 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:45.981 Verification LBA range: start 0x0 length 0x400 00:31:45.981 Nvme10n1 : 1.00 192.51 12.03 0.00 0.00 265970.76 24851.25 312056.22 00:31:45.981 =================================================================================================================== 00:31:45.981 Total : 2127.88 132.99 0.00 0.00 266879.97 19608.37 312056.22 00:31:46.240 14:00:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:31:47.178 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1532459 00:31:47.178 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:31:47.178 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:47.178 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:47.178 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:47.178 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:47.178 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:47.178 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:31:47.178 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:47.178 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:31:47.178 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:47.178 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:47.178 rmmod nvme_tcp 00:31:47.434 rmmod nvme_fabrics 00:31:47.434 rmmod nvme_keyring 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1532459 ']' 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1532459 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 1532459 ']' 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 1532459 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1532459 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1532459' 00:31:47.434 killing process with pid 1532459 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 1532459 00:31:47.434 14:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 1532459 00:31:47.999 14:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:47.999 14:00:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:47.999 14:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:47.999 14:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:47.999 14:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:47.999 14:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.999 14:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:47.999 14:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:49.901 00:31:49.901 real 0m8.624s 00:31:49.901 user 0m26.332s 00:31:49.901 sys 0m1.813s 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:49.901 ************************************ 00:31:49.901 END TEST nvmf_shutdown_tc2 00:31:49.901 ************************************ 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:49.901 ************************************ 00:31:49.901 START TEST nvmf_shutdown_tc3 00:31:49.901 ************************************ 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:49.901 
14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.901 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:49.902 14:00:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:49.902 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:49.902 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:49.902 Found net devices under 0000:af:00.0: cvl_0_0 00:31:49.902 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.161 14:00:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:50.161 Found net devices under 0000:af:00.1: cvl_0_1 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:50.161 14:00:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:50.161 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:50.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:50.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:31:50.420 00:31:50.420 --- 10.0.0.2 ping statistics --- 00:31:50.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.420 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:50.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:50.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:31:50.420 00:31:50.420 --- 10.0.0.1 ping statistics --- 00:31:50.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.420 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1534509 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1534509 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1534509 ']' 00:31:50.420 14:00:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:50.420 14:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:50.420 [2024-06-10 14:00:04.797269] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:31:50.420 [2024-06-10 14:00:04.797334] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.420 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.678 [2024-06-10 14:00:04.914916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:50.678 [2024-06-10 14:00:05.001349] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:50.678 [2024-06-10 14:00:05.001394] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:50.678 [2024-06-10 14:00:05.001407] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:50.678 [2024-06-10 14:00:05.001419] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:50.678 [2024-06-10 14:00:05.001428] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
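At this point nvmf_tcp_init has finished wiring up the loopback fixture the rest of the test runs over: the two E810 ports enumerated above (cvl_0_0 and cvl_0_1) are split across network namespaces, addressed on 10.0.0.0/24, opened up on the NVMe/TCP port, verified with a ping in each direction, and nvmf_tgt is then launched inside the target namespace. A condensed sketch of that sequence, with the interface names taken from this node and the target binary path shortened, looks like:

  ip netns add cvl_0_0_ns_spdk                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

The -m 0x1E core mask matches the four reactors reported on cores 1-4 in the startup notices that follow.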
00:31:50.678 [2024-06-10 14:00:05.001532] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:31:50.678 [2024-06-10 14:00:05.001658] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:31:50.678 [2024-06-10 14:00:05.001767] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:31:50.678 [2024-06-10 14:00:05.001767] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.244 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:51.244 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:31:51.244 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:51.244 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:51.244 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:51.502 [2024-06-10 14:00:05.755968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:51.502 14:00:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:51.502 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:51.503 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:51.503 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:31:51.503 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:31:51.503 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.503 14:00:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:51.503 Malloc1 00:31:51.503 [2024-06-10 14:00:05.872130] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:51.503 Malloc2 00:31:51.503 Malloc3 00:31:51.760 Malloc4 00:31:51.760 Malloc5 00:31:51.760 Malloc6 00:31:51.760 Malloc7 00:31:51.760 Malloc8 00:31:51.760 Malloc9 00:31:52.019 Malloc10 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1535067 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1535067 /var/tmp/bdevperf.sock 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1535067 ']' 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:52.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
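With the TCP transport created, the ten cnode subsystems defined and Malloc1 through Malloc10 attached, the listener is up on 10.0.0.2:4420 and the workload side starts: bdevperf is pointed at the target through a JSON config generated on the fly by gen_nvmf_target_json and handed over an anonymous file descriptor (the /dev/fd/63 in the command line above is bash process substitution). A sketch of the invocation, with the repository path shortened:

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json {1..10}) \
      -q 64 -o 65536 -w verify -t 10

Each of the ten entries the helper emits is a bdev_nvme_attach_controller call of the form below (the fully resolved config is printed a little further down in the trace):

  { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false },
    "method": "bdev_nvme_attach_controller" }

Controller Nvme1 surfaces as bdev Nvme1n1, which the waitforio loop below polls with bdev_get_iostat until the queue-depth-64, 64 KiB verify workload has accumulated at least 100 reads before the target is killed.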
00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:52.019 { 00:31:52.019 "params": { 00:31:52.019 "name": "Nvme$subsystem", 00:31:52.019 "trtype": "$TEST_TRANSPORT", 00:31:52.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.019 "adrfam": "ipv4", 00:31:52.019 "trsvcid": "$NVMF_PORT", 00:31:52.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.019 "hdgst": ${hdgst:-false}, 00:31:52.019 "ddgst": ${ddgst:-false} 00:31:52.019 }, 00:31:52.019 "method": "bdev_nvme_attach_controller" 00:31:52.019 } 00:31:52.019 EOF 00:31:52.019 )") 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:52.019 { 00:31:52.019 "params": { 00:31:52.019 "name": "Nvme$subsystem", 00:31:52.019 "trtype": "$TEST_TRANSPORT", 00:31:52.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.019 "adrfam": "ipv4", 00:31:52.019 "trsvcid": "$NVMF_PORT", 00:31:52.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.019 "hdgst": ${hdgst:-false}, 00:31:52.019 "ddgst": ${ddgst:-false} 00:31:52.019 }, 00:31:52.019 "method": "bdev_nvme_attach_controller" 00:31:52.019 } 00:31:52.019 EOF 00:31:52.019 )") 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:52.019 { 00:31:52.019 "params": { 00:31:52.019 "name": "Nvme$subsystem", 00:31:52.019 "trtype": "$TEST_TRANSPORT", 00:31:52.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.019 "adrfam": "ipv4", 00:31:52.019 "trsvcid": "$NVMF_PORT", 00:31:52.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.019 "hdgst": ${hdgst:-false}, 00:31:52.019 "ddgst": ${ddgst:-false} 00:31:52.019 }, 00:31:52.019 "method": "bdev_nvme_attach_controller" 00:31:52.019 } 00:31:52.019 EOF 00:31:52.019 )") 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:52.019 { 00:31:52.019 "params": { 00:31:52.019 "name": "Nvme$subsystem", 00:31:52.019 "trtype": "$TEST_TRANSPORT", 00:31:52.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.019 "adrfam": "ipv4", 00:31:52.019 "trsvcid": "$NVMF_PORT", 
00:31:52.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.019 "hdgst": ${hdgst:-false}, 00:31:52.019 "ddgst": ${ddgst:-false} 00:31:52.019 }, 00:31:52.019 "method": "bdev_nvme_attach_controller" 00:31:52.019 } 00:31:52.019 EOF 00:31:52.019 )") 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:52.019 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:52.019 { 00:31:52.019 "params": { 00:31:52.019 "name": "Nvme$subsystem", 00:31:52.019 "trtype": "$TEST_TRANSPORT", 00:31:52.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.019 "adrfam": "ipv4", 00:31:52.019 "trsvcid": "$NVMF_PORT", 00:31:52.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.020 "hdgst": ${hdgst:-false}, 00:31:52.020 "ddgst": ${ddgst:-false} 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 } 00:31:52.020 EOF 00:31:52.020 )") 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:52.020 { 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme$subsystem", 00:31:52.020 "trtype": "$TEST_TRANSPORT", 00:31:52.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "$NVMF_PORT", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.020 "hdgst": ${hdgst:-false}, 00:31:52.020 "ddgst": ${ddgst:-false} 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 } 00:31:52.020 EOF 00:31:52.020 )") 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:52.020 [2024-06-10 14:00:06.366974] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:31:52.020 [2024-06-10 14:00:06.367038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535067 ] 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:52.020 { 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme$subsystem", 00:31:52.020 "trtype": "$TEST_TRANSPORT", 00:31:52.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "$NVMF_PORT", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.020 "hdgst": ${hdgst:-false}, 00:31:52.020 "ddgst": ${ddgst:-false} 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 } 00:31:52.020 EOF 00:31:52.020 )") 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:52.020 { 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme$subsystem", 00:31:52.020 "trtype": "$TEST_TRANSPORT", 00:31:52.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "$NVMF_PORT", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.020 "hdgst": ${hdgst:-false}, 00:31:52.020 "ddgst": ${ddgst:-false} 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 } 00:31:52.020 EOF 00:31:52.020 )") 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:52.020 { 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme$subsystem", 00:31:52.020 "trtype": "$TEST_TRANSPORT", 00:31:52.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "$NVMF_PORT", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.020 "hdgst": ${hdgst:-false}, 00:31:52.020 "ddgst": ${ddgst:-false} 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 } 00:31:52.020 EOF 00:31:52.020 )") 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:52.020 { 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme$subsystem", 00:31:52.020 "trtype": "$TEST_TRANSPORT", 00:31:52.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "$NVMF_PORT", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.020 "hdgst": ${hdgst:-false}, 
00:31:52.020 "ddgst": ${ddgst:-false} 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 } 00:31:52.020 EOF 00:31:52.020 )") 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:31:52.020 14:00:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme1", 00:31:52.020 "trtype": "tcp", 00:31:52.020 "traddr": "10.0.0.2", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "4420", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:52.020 "hdgst": false, 00:31:52.020 "ddgst": false 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 },{ 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme2", 00:31:52.020 "trtype": "tcp", 00:31:52.020 "traddr": "10.0.0.2", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "4420", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:52.020 "hdgst": false, 00:31:52.020 "ddgst": false 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 },{ 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme3", 00:31:52.020 "trtype": "tcp", 00:31:52.020 "traddr": "10.0.0.2", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "4420", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:52.020 "hdgst": false, 00:31:52.020 "ddgst": false 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 },{ 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme4", 00:31:52.020 "trtype": "tcp", 00:31:52.020 "traddr": "10.0.0.2", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "4420", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:52.020 "hdgst": false, 00:31:52.020 "ddgst": false 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 },{ 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme5", 00:31:52.020 "trtype": "tcp", 00:31:52.020 "traddr": "10.0.0.2", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "4420", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:52.020 "hdgst": false, 00:31:52.020 "ddgst": false 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 },{ 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme6", 00:31:52.020 "trtype": "tcp", 00:31:52.020 "traddr": "10.0.0.2", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "4420", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:52.020 "hdgst": false, 00:31:52.020 "ddgst": false 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 },{ 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme7", 00:31:52.020 "trtype": "tcp", 00:31:52.020 "traddr": "10.0.0.2", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "4420", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:52.020 "hdgst": false, 00:31:52.020 "ddgst": false 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 
00:31:52.020 },{ 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme8", 00:31:52.020 "trtype": "tcp", 00:31:52.020 "traddr": "10.0.0.2", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "4420", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:52.020 "hdgst": false, 00:31:52.020 "ddgst": false 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 },{ 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme9", 00:31:52.020 "trtype": "tcp", 00:31:52.020 "traddr": "10.0.0.2", 00:31:52.020 "adrfam": "ipv4", 00:31:52.020 "trsvcid": "4420", 00:31:52.020 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:52.020 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:52.020 "hdgst": false, 00:31:52.020 "ddgst": false 00:31:52.020 }, 00:31:52.020 "method": "bdev_nvme_attach_controller" 00:31:52.020 },{ 00:31:52.020 "params": { 00:31:52.020 "name": "Nvme10", 00:31:52.020 "trtype": "tcp", 00:31:52.020 "traddr": "10.0.0.2", 00:31:52.021 "adrfam": "ipv4", 00:31:52.021 "trsvcid": "4420", 00:31:52.021 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:52.021 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:52.021 "hdgst": false, 00:31:52.021 "ddgst": false 00:31:52.021 }, 00:31:52.021 "method": "bdev_nvme_attach_controller" 00:31:52.021 }' 00:31:52.021 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.021 [2024-06-10 14:00:06.489930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.279 [2024-06-10 14:00:06.572278] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.659 Running I/O for 10 seconds... 00:31:53.659 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:53.659 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:31:53.659 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:53.659 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.659 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:31:53.919 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:54.178 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:54.178 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:54.178 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:54.178 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:54.178 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:54.178 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:54.437 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:54.437 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:31:54.437 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:31:54.437 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1534509 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 1534509 ']' 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # 
kill -0 1534509 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:54.724 14:00:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1534509 00:31:54.724 14:00:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:54.724 14:00:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:54.724 14:00:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1534509' 00:31:54.724 killing process with pid 1534509 00:31:54.724 14:00:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 1534509 00:31:54.724 14:00:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 1534509 00:31:54.724 [2024-06-10 14:00:09.056282] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056331] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056341] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056351] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056360] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056368] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056377] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056385] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056394] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056403] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056412] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056421] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056429] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056437] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056446] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056454] 
tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056462] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056471] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056480] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056492] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056501] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056510] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056518] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056527] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056535] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056544] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056552] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056561] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056570] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056584] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056593] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056601] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056610] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056618] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056627] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056636] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056645] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the 
state(5) to be set 00:31:54.724 [2024-06-10 14:00:09.056655] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056663] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056672] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056681] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056689] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056698] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056706] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056715] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056724] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056734] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056743] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056751] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056760] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056768] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056777] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056785] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056794] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056802] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056810] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056819] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056827] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056836] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056845] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056854] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056862] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056871] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.056879] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1327ea0 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.058605] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150ba40 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.058643] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150ba40 is same with the state(5) to be set 00:31:54.725 [2024-06-10 14:00:09.060062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 
14:00:09.060284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060563] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.725 [2024-06-10 14:00:09.060780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.725 [2024-06-10 14:00:09.060793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.060810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.060822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.060837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.060850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.060865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.060878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.060895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.060908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.060923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.060935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.060951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.060963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.060978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.060970] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.060991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.061003] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.061018] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.061031] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.061043] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.061056] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the 
state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:12[2024-06-10 14:00:09.061068] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-10 14:00:09.061082] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061097] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.061109] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.061121] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.061133] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.061146] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061159] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with [2024-06-10 14:00:09.061160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:12the state(5) to be set 00:31:54.726 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.061172] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.061184] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.061196] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.061209] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.061224] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.061237] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:12[2024-06-10 14:00:09.061249] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-10 14:00:09.061264] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061278] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.061291] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.061304] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.061317] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.061329] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:12[2024-06-10 14:00:09.061342] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 c[2024-06-10 14:00:09.061356] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061371] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.061383] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.726 [2024-06-10 14:00:09.061396] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.726 [2024-06-10 14:00:09.061402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.726 [2024-06-10 14:00:09.061409] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061422] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061434] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061446] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061459] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with [2024-06-10 14:00:09.061459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:12the state(5) to be set 00:31:54.727 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061474] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061486] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with [2024-06-10 14:00:09.061491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:12the state(5) to be set 00:31:54.727 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061504] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x13287e0 is same with [2024-06-10 14:00:09.061506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:31:54.727 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061518] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061530] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061543] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061555] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061568] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061590] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061603] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:12[2024-06-10 14:00:09.061615] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061629] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with [2024-06-10 14:00:09.061629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:31:54.727 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061643] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061656] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061669] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061681] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061694] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:12[2024-06-10 14:00:09.061708] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-10 14:00:09.061724] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061739] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061750] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061763] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061775] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061787] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:12[2024-06-10 14:00:09.061800] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13287e0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-10 14:00:09.061814] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13287e0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 the state(5) to be set 00:31:54.727 [2024-06-10 14:00:09.061830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.061945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.061958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.062389] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x29796f0 was disconnected and freed. reset controller. 
00:31:54.727 [2024-06-10 14:00:09.062553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.062571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.062596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.062610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.062625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.062637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.727 [2024-06-10 14:00:09.062652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.727 [2024-06-10 14:00:09.062665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.062680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.062693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.062708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.062721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.062736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.062748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.062763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.062775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.062790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.062802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.062817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.062829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 
[2024-06-10 14:00:09.062847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.062860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.062875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.062888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.062902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.062915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.062930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.062944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.062959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.062973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.062987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063087] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1328ca0 is same with [2024-06-10 14:00:09.063102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:1the state(5) to be set 00:31:54.728 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 
14:00:09.063116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063120] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1328ca0 is same with the state(5) to be set 00:31:54.728 [2024-06-10 14:00:09.063131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.728 [2024-06-10 14:00:09.063623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.728 [2024-06-10 14:00:09.063637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.063650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.063677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.063705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.063732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.063759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.063786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.063814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.063841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.063861] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.063871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063879] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.063886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:1[2024-06-10 14:00:09.063888] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.063899] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.063900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063908] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.063917] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with [2024-06-10 14:00:09.063916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:1the state(5) to be set 00:31:54.729 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.063928] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.063931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063937] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.063947] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with [2024-06-10 14:00:09.063946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:1the state(5) to be set 00:31:54.729 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.063959] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.063962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063968] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.063977] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.063977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.063986] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.063991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.063995] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064005] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.064013] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.064026] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064035] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.064044] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-10 14:00:09.064053] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064064] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.064072] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064081] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with [2024-06-10 14:00:09.064080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:31:54.729 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.064092] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 [2024-06-10 14:00:09.064101] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064111] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.729 [2024-06-10 14:00:09.064120] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:1[2024-06-10 14:00:09.064130] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.729 the state(5) to be set 00:31:54.729 [2024-06-10 14:00:09.064140] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set 00:31:54.730 [2024-06-10 14:00:09.064141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.064149] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.064158] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064170] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.064182] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.064192] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064201] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.064211] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.064220] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064231] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.064240] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064249] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.064259] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.064268] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064278] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.064287] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.064296] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064308] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.064317] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.064326] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064339] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.064348] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.064357] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064368] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.064378] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.064387] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064399] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064408] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064416] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064424] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064433] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064441] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064450] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064458] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064464] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2a7d7d0 was disconnected and freed. reset controller.
00:31:54.730 [2024-06-10 14:00:09.064467] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329140 is same with the state(5) to be set
00:31:54.730 [2024-06-10 14:00:09.064945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.064971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.064989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.065002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.065018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.065031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.065049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.065063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.065078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.065090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.730 [2024-06-10 14:00:09.065106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.730 [2024-06-10 14:00:09.065119] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.730 [2024-06-10 14:00:09.065134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.730 [2024-06-10 14:00:09.065146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.730 [2024-06-10 14:00:09.065161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.730 [2024-06-10 14:00:09.065174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.730 [2024-06-10 14:00:09.065188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.730 [2024-06-10 14:00:09.065201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.730 [2024-06-10 14:00:09.065216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.730 [2024-06-10 14:00:09.065228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.730 [2024-06-10 14:00:09.065243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.730 [2024-06-10 14:00:09.065256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.730 [2024-06-10 14:00:09.065271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.730 [2024-06-10 14:00:09.065284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.730 [2024-06-10 14:00:09.065298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.731 [2024-06-10 14:00:09.065311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.731 [2024-06-10 14:00:09.065326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.731 [2024-06-10 14:00:09.065338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.731 [2024-06-10 14:00:09.065353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.731 [2024-06-10 14:00:09.065366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.731 [2024-06-10 14:00:09.065381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.731 [2024-06-10 14:00:09.065396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065454] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065483] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065497] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065510] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065524] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065537] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065550] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065563] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065579] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065592] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065605] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065623] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065636] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065649] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065663] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065675] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065691] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065706] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065718] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065731] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065743] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065756] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065769] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065782] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065794] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065810] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065825] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065837] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065850] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065863] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065875] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065888] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.731 [2024-06-10 14:00:09.065901] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.731 [2024-06-10 14:00:09.065915] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.731 [2024-06-10 14:00:09.065929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.065929] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.065945] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.065946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.065957] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.065960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.065970] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.065976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.065983] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.065991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.065995] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.066008] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.066021] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066036] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.066050] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.066064] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.066076] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.066089] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.066101] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.066114] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066127] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.066140] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.066152] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.066165] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.066177] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.066190] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.066203] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066218] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.066230] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.066243] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.066255] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.066274] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.066287] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.732 [2024-06-10 14:00:09.066299] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329600 is same with the state(5) to be set
00:31:54.732 [2024-06-10 14:00:09.066306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.732 [2024-06-10 14:00:09.066319] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.732 [2024-06-10 14:00:09.066334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.732 [2024-06-10 14:00:09.067397] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329aa0 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.067423] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329aa0 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.067435] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329aa0 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068182] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068204] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068216] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068228] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068240] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068252] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068264] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068276] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068288] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068300] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068312] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068324] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068336] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068348] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068359] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068371] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068383] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with 
the state(5) to be set 00:31:54.732 [2024-06-10 14:00:09.068394] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068406] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068417] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068429] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068440] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068452] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068463] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068475] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068487] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068499] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068515] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068526] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068538] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068550] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068562] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068574] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068591] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068603] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068615] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068627] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068639] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068651] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068663] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068674] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068686] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068697] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068709] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068721] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068733] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068744] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068756] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068768] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068780] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068791] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068803] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068815] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068827] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068841] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068853] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068865] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068877] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068889] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068900] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 
14:00:09.068912] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068924] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.068936] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329f40 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069645] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069661] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069674] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069683] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069691] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069700] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069709] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069717] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069725] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069734] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069742] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069751] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069759] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069768] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069778] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069787] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069795] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069805] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.733 [2024-06-10 14:00:09.069816] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same 
with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069825] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069833] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069842] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069850] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069859] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069867] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069876] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069885] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069893] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069902] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069910] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069919] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069927] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069936] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069944] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069953] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069962] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069971] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069980] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069988] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.069996] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070005] 
tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070014] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070022] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070032] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070041] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070051] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070060] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070068] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070077] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070085] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070094] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070102] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070110] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070119] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070127] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070135] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070144] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070152] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070161] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070209] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070266] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070321] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the 
state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.070377] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a3e0 is same with the state(5) to be set 00:31:54.734 [2024-06-10 14:00:09.080213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.734 [2024-06-10 14:00:09.080235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.734 [2024-06-10 14:00:09.080249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.734 [2024-06-10 14:00:09.080265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.734 [2024-06-10 14:00:09.080279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.734 [2024-06-10 14:00:09.080295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.734 [2024-06-10 14:00:09.080309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.734 [2024-06-10 14:00:09.080325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.734 [2024-06-10 14:00:09.080338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.734 [2024-06-10 14:00:09.080357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.734 [2024-06-10 14:00:09.080371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.734 [2024-06-10 14:00:09.080387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.734 [2024-06-10 14:00:09.080401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.734 [2024-06-10 14:00:09.080417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.734 [2024-06-10 14:00:09.080431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.734 [2024-06-10 14:00:09.080447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.734 [2024-06-10 14:00:09.080461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.734 [2024-06-10 14:00:09.080477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.734 [2024-06-10 14:00:09.080491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.734 [2024-06-10 
14:00:09.080507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.734 [2024-06-10 14:00:09.080520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.734 [2024-06-10 14:00:09.080537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.080550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.080566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.080603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.080620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.080634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.080650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.080663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.080679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.080693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.080709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.080723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.080764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:54.735 [2024-06-10 14:00:09.080831] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2a38780 was disconnected and freed. reset controller. 
00:31:54.735 [2024-06-10 14:00:09.081304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 
[2024-06-10 14:00:09.081630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 
14:00:09.081932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.081974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.081990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.082004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.082020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.082033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.082049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.735 [2024-06-10 14:00:09.082065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.735 [2024-06-10 14:00:09.082081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 
14:00:09.082265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.082962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.082981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.083021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.083064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.083104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.083144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.083184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.083224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.083264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.083304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.083344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.083384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.083424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.736 [2024-06-10 14:00:09.083463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.736 [2024-06-10 14:00:09.083485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.737 [2024-06-10 14:00:09.083503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.083526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.737 [2024-06-10 14:00:09.083544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.083569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.737 [2024-06-10 14:00:09.083597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.083619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.737 [2024-06-10 14:00:09.083638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.083683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:54.737 [2024-06-10 14:00:09.083757] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28cad70 was disconnected and freed. reset controller. 00:31:54.737 [2024-06-10 14:00:09.085729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:31:54.737 [2024-06-10 14:00:09.085807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29b4870 (9): Bad file descriptor 00:31:54.737 [2024-06-10 14:00:09.085884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.085907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.085927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.085946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.085965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.085983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29a0a00 is same with the state(5) to be set 
00:31:54.737 [2024-06-10 14:00:09.086092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29b53c0 is same with the state(5) to be set 00:31:54.737 [2024-06-10 14:00:09.086300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x298ed80 is same with the state(5) to be set 00:31:54.737 [2024-06-10 14:00:09.086502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28cf6b0 is same with the state(5) to be set 00:31:54.737 [2024-06-10 14:00:09.086703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a8bd70 is same with the state(5) to be set 00:31:54.737 [2024-06-10 14:00:09.086900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.086957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.086977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:54.737 [2024-06-10 14:00:09.086996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.087015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.087034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.087051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28fcd10 is same with the state(5) to be set 00:31:54.737 [2024-06-10 14:00:09.087098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.087119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.087139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.087157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.087176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.087195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.087214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.737 [2024-06-10 14:00:09.087233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.737 [2024-06-10 14:00:09.087250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28d1820 is same with the state(5) to be set 00:31:54.738 [2024-06-10 14:00:09.087295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.738 [2024-06-10 14:00:09.087315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.738 [2024-06-10 14:00:09.087335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.738 [2024-06-10 14:00:09.087354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.738 [2024-06-10 14:00:09.087373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.738 [2024-06-10 14:00:09.087391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.738 [2024-06-10 14:00:09.087414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.738 [2024-06-10 14:00:09.087432] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.738 [2024-06-10 14:00:09.087450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a998a0 is same with the state(5) to be set 00:31:54.738 [2024-06-10 14:00:09.087498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.738 [2024-06-10 14:00:09.087518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.738 [2024-06-10 14:00:09.087538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.738 [2024-06-10 14:00:09.087556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.738 [2024-06-10 14:00:09.087583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.738 [2024-06-10 14:00:09.087602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.738 [2024-06-10 14:00:09.087621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.738 [2024-06-10 14:00:09.087639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.738 [2024-06-10 14:00:09.087657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4610 is same with the state(5) to be set 00:31:54.738 [2024-06-10 14:00:09.092913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:31:54.738 [2024-06-10 14:00:09.092949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:31:54.738 [2024-06-10 14:00:09.092970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a8bd70 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.092989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28cf6b0 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.093685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:31:54.738 [2024-06-10 14:00:09.093721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4610 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.094008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.738 [2024-06-10 14:00:09.094028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29b4870 with addr=10.0.0.2, port=4420 00:31:54.738 [2024-06-10 14:00:09.094042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29b4870 is same with the state(5) to be set 00:31:54.738 [2024-06-10 14:00:09.094118] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:54.738 [2024-06-10 14:00:09.095501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.738 [2024-06-10 14:00:09.095529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28cf6b0 with 
addr=10.0.0.2, port=4420 00:31:54.738 [2024-06-10 14:00:09.095543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28cf6b0 is same with the state(5) to be set 00:31:54.738 [2024-06-10 14:00:09.095747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.738 [2024-06-10 14:00:09.095764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a8bd70 with addr=10.0.0.2, port=4420 00:31:54.738 [2024-06-10 14:00:09.095777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a8bd70 is same with the state(5) to be set 00:31:54.738 [2024-06-10 14:00:09.095808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29b4870 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.095894] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:54.738 [2024-06-10 14:00:09.095961] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:54.738 [2024-06-10 14:00:09.096018] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:54.738 [2024-06-10 14:00:09.096083] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:54.738 [2024-06-10 14:00:09.096139] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:54.738 [2024-06-10 14:00:09.096356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.738 [2024-06-10 14:00:09.096375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d4610 with addr=10.0.0.2, port=4420 00:31:54.738 [2024-06-10 14:00:09.096388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4610 is same with the state(5) to be set 00:31:54.738 [2024-06-10 14:00:09.096405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28cf6b0 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.096421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a8bd70 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.096436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:31:54.738 [2024-06-10 14:00:09.096448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:31:54.738 [2024-06-10 14:00:09.096462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:31:54.738 [2024-06-10 14:00:09.096484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29a0a00 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.096511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29b53c0 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.096538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x298ed80 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.096565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28fcd10 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.096594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28d1820 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.096618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a998a0 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.096757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.738 [2024-06-10 14:00:09.096776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4610 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.096791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:54.738 [2024-06-10 14:00:09.096803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:31:54.738 [2024-06-10 14:00:09.096815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:31:54.738 [2024-06-10 14:00:09.096831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:31:54.738 [2024-06-10 14:00:09.096842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:31:54.738 [2024-06-10 14:00:09.096854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:31:54.738 [2024-06-10 14:00:09.096908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.738 [2024-06-10 14:00:09.096919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.738 [2024-06-10 14:00:09.096931] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:31:54.738 [2024-06-10 14:00:09.096946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:31:54.738 [2024-06-10 14:00:09.096958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:31:54.738 [2024-06-10 14:00:09.097008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:54.738 [2024-06-10 14:00:09.103146] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:31:54.738 [2024-06-10 14:00:09.103503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.738 [2024-06-10 14:00:09.103522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29b4870 with addr=10.0.0.2, port=4420 00:31:54.738 [2024-06-10 14:00:09.103535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29b4870 is same with the state(5) to be set 00:31:54.738 [2024-06-10 14:00:09.103592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29b4870 (9): Bad file descriptor 00:31:54.738 [2024-06-10 14:00:09.103641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:31:54.738 [2024-06-10 14:00:09.103654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:31:54.738 [2024-06-10 14:00:09.103667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:31:54.738 [2024-06-10 14:00:09.103718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.738 [2024-06-10 14:00:09.104193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:31:54.739 [2024-06-10 14:00:09.104210] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:31:54.739 [2024-06-10 14:00:09.104491] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.739 [2024-06-10 14:00:09.104510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a8bd70 with addr=10.0.0.2, port=4420 00:31:54.739 [2024-06-10 14:00:09.104522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a8bd70 is same with the state(5) to be set 00:31:54.739 [2024-06-10 14:00:09.104750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.739 [2024-06-10 14:00:09.104767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28cf6b0 with addr=10.0.0.2, port=4420 00:31:54.739 [2024-06-10 14:00:09.104780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28cf6b0 is same with the state(5) to be set 00:31:54.739 [2024-06-10 14:00:09.104829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a8bd70 (9): Bad file descriptor 00:31:54.739 [2024-06-10 14:00:09.104845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28cf6b0 (9): Bad file descriptor 00:31:54.739 [2024-06-10 14:00:09.104893] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:31:54.739 [2024-06-10 14:00:09.104906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:31:54.739 [2024-06-10 14:00:09.104919] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:31:54.739 [2024-06-10 14:00:09.104935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:54.739 [2024-06-10 14:00:09.104947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:31:54.739 [2024-06-10 14:00:09.104958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:31:54.739 [2024-06-10 14:00:09.105008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.739 [2024-06-10 14:00:09.105020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.739 [2024-06-10 14:00:09.105939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:31:54.739 [2024-06-10 14:00:09.106305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.739 [2024-06-10 14:00:09.106324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d4610 with addr=10.0.0.2, port=4420 00:31:54.739 [2024-06-10 14:00:09.106337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4610 is same with the state(5) to be set 00:31:54.739 [2024-06-10 14:00:09.106386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4610 (9): Bad file descriptor 00:31:54.739 [2024-06-10 14:00:09.106498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:31:54.739 [2024-06-10 14:00:09.106512] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:31:54.739 [2024-06-10 14:00:09.106524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:31:54.739 [2024-06-10 14:00:09.106594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 
14:00:09.106884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.106978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.739 [2024-06-10 14:00:09.106993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.739 [2024-06-10 14:00:09.107006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107159] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.740 [2024-06-10 14:00:09.107928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.740 [2024-06-10 14:00:09.107943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.107955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.107970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.107982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.107997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.108380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.108394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2981050 is same with the state(5) to be set 00:31:54.741 [2024-06-10 14:00:09.109738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.109758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.109775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.109788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.109803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.109816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.109831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.109843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.109858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.109871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.109885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.109898] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.109913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.109926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.109940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.109956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.109972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.109986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.110001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.110016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.110032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.110046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.110062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.110076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.110092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.110107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.110123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.110138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.110154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.110167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.110184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.110199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.110214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.110227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.110242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.741 [2024-06-10 14:00:09.110255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.741 [2024-06-10 14:00:09.110270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.110977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.110989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.111004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.111019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.111037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.111050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:54.742 [2024-06-10 14:00:09.111066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.111079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.111094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.111108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.111123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.111135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.111151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.111165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.111179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.111193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.111209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.111222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.111237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.111251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.111266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.111279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.111294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.742 [2024-06-10 14:00:09.111307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.742 [2024-06-10 14:00:09.111322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.111335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 
14:00:09.111349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.111364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.111379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.111394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.111410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.111423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.111437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.111450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.111466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.111480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.111495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.111507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.111522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.111534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.111549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.111564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.111589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a37260 is same with the state(5) to be set 00:31:54.743 [2024-06-10 14:00:09.112930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.112951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.112969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.112982] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.112997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.743 [2024-06-10 14:00:09.113736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.743 [2024-06-10 14:00:09.113751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.113763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.113778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.113792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.113807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.113820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.113835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.113848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.113862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.113875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.113890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.113903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.113917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.113930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.113945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.113958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.113972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.113984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.113999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.744 [2024-06-10 14:00:09.114642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.744 [2024-06-10 14:00:09.114654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.114669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.114682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.114697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.114709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.114723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a39c10 is same with the state(5) to be set 00:31:54.745 [2024-06-10 14:00:09.116047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.745 [2024-06-10 14:00:09.116944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.745 [2024-06-10 14:00:09.116958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.116972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.116986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.116999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:54.746 [2024-06-10 14:00:09.117387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 
14:00:09.117670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.117837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.117851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28c98a0 is same with the state(5) to be set 00:31:54.746 [2024-06-10 14:00:09.119182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.119202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.119219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.119232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.119250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.746 [2024-06-10 14:00:09.119263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.746 [2024-06-10 14:00:09.119278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.119985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.119997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.120025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.120052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.120080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.120107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.120134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.120162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.120190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.120217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.120245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.120272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.120302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.747 [2024-06-10 14:00:09.120330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.747 [2024-06-10 14:00:09.120345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.120976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.120989] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28cc270 is same with the state(5) to be set 00:31:54.748 [2024-06-10 14:00:09.122318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.122355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.122383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.122411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.122439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.122467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.122494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.122522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.122550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122562] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.122582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.122611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.122639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.748 [2024-06-10 14:00:09.122666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.748 [2024-06-10 14:00:09.122682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.122697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.122713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.122728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.122740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.122755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.122768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.122783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.122795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.122810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.122823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.122837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.122850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.122865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.122878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.122893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.122906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.122921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.122933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.122948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.122961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.122976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.122989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.749 [2024-06-10 14:00:09.123517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.749 [2024-06-10 14:00:09.123531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:54.750 [2024-06-10 14:00:09.123700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.123952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 
14:00:09.123979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.123992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.124007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.124020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.124035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.124047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.124062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.124077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.124092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.750 [2024-06-10 14:00:09.124104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.750 [2024-06-10 14:00:09.124118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28cd640 is same with the state(5) to be set 00:31:54.750 [2024-06-10 14:00:09.126248] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.750 [2024-06-10 14:00:09.126274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:54.750 [2024-06-10 14:00:09.126292] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:31:54.750 [2024-06-10 14:00:09.126307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:31:54.750 [2024-06-10 14:00:09.126404] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:54.750 [2024-06-10 14:00:09.126424] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:54.750 [2024-06-10 14:00:09.126441] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
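Note on the block of NOTICE lines above: these are bdevperf's in-flight READ commands on qpair 1 (sqid:1) being completed with ABORTED - SQ DELETION (status type 0x0, status code 0x08) when the submission queue is torn down during the controller reset; one aborted completion is printed per outstanding command. A quick way to count them when reading a saved console log (a sketch; build.log is a hypothetical file name for this console output):

    grep -o 'ABORTED - SQ DELETION' build.log | wc -l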
00:31:54.750 [2024-06-10 14:00:09.126523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:31:54.750 [2024-06-10 14:00:09.126539] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:31:54.750 task offset: 24320 on job bdev=Nvme10n1 fails
00:31:54.750
00:31:54.750 Latency(us)
00:31:54.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:54.750 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.750 Job: Nvme1n1 ended in about 1.00 seconds with error
00:31:54.750 Verification LBA range: start 0x0 length 0x400
00:31:54.750 Nvme1n1 : 1.00 127.79 7.99 63.89 0.00 330061.96 24222.11 288568.12
00:31:54.750 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.750 Job: Nvme2n1 ended in about 0.98 seconds with error
00:31:54.750 Verification LBA range: start 0x0 length 0x400
00:31:54.750 Nvme2n1 : 0.98 195.69 12.23 65.23 0.00 237129.73 27053.26 268435.46
00:31:54.750 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.750 Job: Nvme3n1 ended in about 1.00 seconds with error
00:31:54.750 Verification LBA range: start 0x0 length 0x400
00:31:54.750 Nvme3n1 : 1.00 127.38 7.96 63.69 0.00 317323.95 24431.82 295279.00
00:31:54.750 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.750 Job: Nvme4n1 ended in about 0.98 seconds with error
00:31:54.750 Verification LBA range: start 0x0 length 0x400
00:31:54.750 Nvme4n1 : 0.98 195.33 12.21 65.11 0.00 227171.74 21390.95 273468.62
00:31:54.750 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.750 Job: Nvme5n1 ended in about 1.01 seconds with error
00:31:54.750 Verification LBA range: start 0x0 length 0x400
00:31:54.750 Nvme5n1 : 1.01 126.99 7.94 63.49 0.00 304369.94 35441.87 273468.62
00:31:54.750 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.750 Job: Nvme6n1 ended in about 1.01 seconds with error
00:31:54.750 Verification LBA range: start 0x0 length 0x400
00:31:54.750 Nvme6n1 : 1.01 126.60 7.91 63.30 0.00 298630.89 20027.80 273468.62
00:31:54.750 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.750 Job: Nvme7n1 ended in about 0.98 seconds with error
00:31:54.750 Verification LBA range: start 0x0 length 0x400
00:31:54.750 Nvme7n1 : 0.98 194.96 12.19 64.99 0.00 212126.00 10538.19 270113.18
00:31:54.750 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.750 Job: Nvme8n1 ended in about 1.01 seconds with error
00:31:54.750 Verification LBA range: start 0x0 length 0x400
00:31:54.750 Nvme8n1 : 1.01 131.13 8.20 63.10 0.00 278629.80 25165.82 253335.96
00:31:54.750 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.750 Job: Nvme9n1 ended in about 1.02 seconds with error
00:31:54.750 Verification LBA range: start 0x0 length 0x400
00:31:54.750 Nvme9n1 : 1.02 125.82 7.86 62.91 0.00 279896.88 23173.53 275146.34
00:31:54.750 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.750 Job: Nvme10n1 ended in about 0.98 seconds with error
00:31:54.750 Verification LBA range: start 0x0 length 0x400
00:31:54.750 Nvme10n1 : 0.98 130.96 8.19 65.48 0.00 259679.57 22334.67 310378.50
00:31:54.751 ===================================================================================================================
00:31:54.751 Total : 1482.65 
92.67 641.20 0.00 270065.39 10538.19 310378.50 00:31:54.751 [2024-06-10 14:00:09.155606] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:54.751 [2024-06-10 14:00:09.155651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:31:54.751 [2024-06-10 14:00:09.156067] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.751 [2024-06-10 14:00:09.156090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28d1820 with addr=10.0.0.2, port=4420 00:31:54.751 [2024-06-10 14:00:09.156107] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28d1820 is same with the state(5) to be set 00:31:54.751 [2024-06-10 14:00:09.156408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.751 [2024-06-10 14:00:09.156425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28fcd10 with addr=10.0.0.2, port=4420 00:31:54.751 [2024-06-10 14:00:09.156438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28fcd10 is same with the state(5) to be set 00:31:54.751 [2024-06-10 14:00:09.156636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.751 [2024-06-10 14:00:09.156654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a998a0 with addr=10.0.0.2, port=4420 00:31:54.751 [2024-06-10 14:00:09.156666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a998a0 is same with the state(5) to be set 00:31:54.751 [2024-06-10 14:00:09.158416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:31:54.751 [2024-06-10 14:00:09.158438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:31:54.751 [2024-06-10 14:00:09.158452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:31:54.751 [2024-06-10 14:00:09.158740] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.751 [2024-06-10 14:00:09.158762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x298ed80 with addr=10.0.0.2, port=4420 00:31:54.751 [2024-06-10 14:00:09.158775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x298ed80 is same with the state(5) to be set 00:31:54.751 [2024-06-10 14:00:09.158971] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.751 [2024-06-10 14:00:09.158987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29b53c0 with addr=10.0.0.2, port=4420 00:31:54.751 [2024-06-10 14:00:09.159000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29b53c0 is same with the state(5) to be set 00:31:54.751 [2024-06-10 14:00:09.159247] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.751 [2024-06-10 14:00:09.159263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29a0a00 with addr=10.0.0.2, port=4420 00:31:54.751 [2024-06-10 14:00:09.159276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29a0a00 is same with the state(5) to be set 00:31:54.751 [2024-06-10 14:00:09.159301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x28d1820 (9): Bad file descriptor 00:31:54.751 [2024-06-10 14:00:09.159320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28fcd10 (9): Bad file descriptor 00:31:54.751 [2024-06-10 14:00:09.159335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a998a0 (9): Bad file descriptor 00:31:54.751 [2024-06-10 14:00:09.159375] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:54.751 [2024-06-10 14:00:09.159398] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:54.751 [2024-06-10 14:00:09.159416] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:54.751 [2024-06-10 14:00:09.159434] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:54.751 [2024-06-10 14:00:09.159897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:31:54.751 [2024-06-10 14:00:09.160261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.751 [2024-06-10 14:00:09.160282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29b4870 with addr=10.0.0.2, port=4420 00:31:54.751 [2024-06-10 14:00:09.160296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29b4870 is same with the state(5) to be set 00:31:54.751 [2024-06-10 14:00:09.160499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.751 [2024-06-10 14:00:09.160516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28cf6b0 with addr=10.0.0.2, port=4420 00:31:54.751 [2024-06-10 14:00:09.160529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28cf6b0 is same with the state(5) to be set 00:31:54.751 [2024-06-10 14:00:09.160763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.751 [2024-06-10 14:00:09.160780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a8bd70 with addr=10.0.0.2, port=4420 00:31:54.751 [2024-06-10 14:00:09.160793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a8bd70 is same with the state(5) to be set 00:31:54.751 [2024-06-10 14:00:09.160808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x298ed80 (9): Bad file descriptor 00:31:54.751 [2024-06-10 14:00:09.160824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29b53c0 (9): Bad file descriptor 00:31:54.751 [2024-06-10 14:00:09.160839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29a0a00 (9): Bad file descriptor 00:31:54.751 [2024-06-10 14:00:09.160854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:54.751 [2024-06-10 14:00:09.160866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:54.751 [2024-06-10 14:00:09.160880] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:54.751 [2024-06-10 14:00:09.160897] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:31:54.751 [2024-06-10 14:00:09.160909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:31:54.751 [2024-06-10 14:00:09.160921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:31:54.751 [2024-06-10 14:00:09.160936] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:31:54.751 [2024-06-10 14:00:09.160947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:31:54.751 [2024-06-10 14:00:09.160959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:31:54.751 [2024-06-10 14:00:09.161062] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.751 [2024-06-10 14:00:09.161076] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.751 [2024-06-10 14:00:09.161086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.751 [2024-06-10 14:00:09.161409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.751 [2024-06-10 14:00:09.161427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d4610 with addr=10.0.0.2, port=4420 00:31:54.751 [2024-06-10 14:00:09.161440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4610 is same with the state(5) to be set 00:31:54.751 [2024-06-10 14:00:09.161454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29b4870 (9): Bad file descriptor 00:31:54.751 [2024-06-10 14:00:09.161469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28cf6b0 (9): Bad file descriptor 00:31:54.751 [2024-06-10 14:00:09.161485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a8bd70 (9): Bad file descriptor 00:31:54.751 [2024-06-10 14:00:09.161499] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:31:54.751 [2024-06-10 14:00:09.161510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:31:54.751 [2024-06-10 14:00:09.161522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:31:54.751 [2024-06-10 14:00:09.161537] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:31:54.751 [2024-06-10 14:00:09.161548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:31:54.751 [2024-06-10 14:00:09.161560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:31:54.751 [2024-06-10 14:00:09.161581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:31:54.751 [2024-06-10 14:00:09.161593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:31:54.751 [2024-06-10 14:00:09.161605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:31:54.751 [2024-06-10 14:00:09.161641] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.751 [2024-06-10 14:00:09.161653] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.751 [2024-06-10 14:00:09.161664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.751 [2024-06-10 14:00:09.161676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4610 (9): Bad file descriptor 00:31:54.751 [2024-06-10 14:00:09.161690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:31:54.751 [2024-06-10 14:00:09.161702] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:31:54.751 [2024-06-10 14:00:09.161714] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:31:54.751 [2024-06-10 14:00:09.161728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:54.751 [2024-06-10 14:00:09.161740] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:31:54.751 [2024-06-10 14:00:09.161751] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:31:54.751 [2024-06-10 14:00:09.161767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:31:54.751 [2024-06-10 14:00:09.161779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:31:54.751 [2024-06-10 14:00:09.161794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:31:54.751 [2024-06-10 14:00:09.161828] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.751 [2024-06-10 14:00:09.161840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.752 [2024-06-10 14:00:09.161850] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:54.752 [2024-06-10 14:00:09.161861] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:31:54.752 [2024-06-10 14:00:09.161872] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:31:54.752 [2024-06-10 14:00:09.161884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:31:54.752 [2024-06-10 14:00:09.161921] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
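Two raw error numbers recur in the reset cascade above: connect() failing with errno = 111 and the flush failures reporting (9). On Linux these are ECONNREFUSED (the target's listener on 10.0.0.2:4420 is already gone, which is expected in a shutdown test) and EBADF (the qpair's socket has already been closed). A quick decode sketch, assuming python3 is available on the host:

    python3 -c 'import errno, os; [print(n, errno.errorcode[n], "-", os.strerror(n)) for n in (9, 111)]'
    # 9 EBADF - Bad file descriptor
    # 111 ECONNREFUSED - Connection refused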
00:31:55.352 14:00:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:31:55.352 14:00:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1535067 00:31:56.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1535067) - No such process 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:56.292 rmmod nvme_tcp 00:31:56.292 rmmod nvme_fabrics 00:31:56.292 rmmod nvme_keyring 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:56.292 14:00:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.828 14:00:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:58.828 00:31:58.828 real 0m8.353s 00:31:58.828 user 0m20.797s 00:31:58.828 sys 0m1.798s 00:31:58.828 
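The nvmftestfini / nvmf_tcp_fini sequence above is the per-test cleanup: it unloads the kernel NVMe-over-TCP modules and tears down the namespace-based test network. A rough manual equivalent, using the interface and namespace names from this run (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and assuming a root shell:

    sync
    modprobe -v -r nvme-tcp        # the rmmod lines above (nvme_tcp, nvme_fabrics, nvme_keyring) come from this
    modprobe -v -r nvme-fabrics
    ip netns del cvl_0_0_ns_spdk   # assumption: _remove_spdk_ns deletes the namespace created during test init
    ip -4 addr flush cvl_0_1       # matches the final address flush shown above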
14:00:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:58.828 14:00:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:58.828 ************************************ 00:31:58.828 END TEST nvmf_shutdown_tc3 00:31:58.828 ************************************ 00:31:58.828 14:00:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:31:58.828 00:31:58.828 real 0m35.916s 00:31:58.828 user 1m23.007s 00:31:58.828 sys 0m12.356s 00:31:58.828 14:00:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:58.828 14:00:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:58.828 ************************************ 00:31:58.828 END TEST nvmf_shutdown 00:31:58.828 ************************************ 00:31:58.828 14:00:12 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_exit target 00:31:58.828 14:00:12 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:58.828 14:00:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:58.828 14:00:12 nvmf_tcp -- nvmf/nvmf.sh@89 -- # timing_enter host 00:31:58.828 14:00:12 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:58.828 14:00:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:58.828 14:00:12 nvmf_tcp -- nvmf/nvmf.sh@91 -- # [[ 0 -eq 0 ]] 00:31:58.828 14:00:12 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:58.828 14:00:12 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:58.828 14:00:12 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:58.828 14:00:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:58.828 ************************************ 00:31:58.828 START TEST nvmf_multicontroller 00:31:58.828 ************************************ 00:31:58.828 14:00:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:58.828 * Looking for test storage... 
00:31:58.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:58.828 14:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.828 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:31:58.828 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:31:58.829 14:00:13 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:31:58.829 14:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.950 14:00:21 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:06.950 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:06.950 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:06.950 Found net devices under 0000:af:00.0: cvl_0_0 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:06.950 Found net devices under 0000:af:00.1: cvl_0_1 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.950 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.210 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.210 14:00:21 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.210 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:07.210 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.210 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.210 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.210 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:07.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:32:07.210 00:32:07.210 --- 10.0.0.2 ping statistics --- 00:32:07.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.210 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:32:07.210 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:07.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:32:07.210 00:32:07.210 --- 10.0.0.1 ping statistics --- 00:32:07.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.210 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:32:07.210 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.210 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1540129 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1540129 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 1540129 ']' 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:07.470 14:00:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.470 [2024-06-10 14:00:21.790668] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:32:07.470 [2024-06-10 14:00:21.790732] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.470 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.470 [2024-06-10 14:00:21.908211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:07.737 [2024-06-10 14:00:21.995059] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:07.737 [2024-06-10 14:00:21.995102] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.737 [2024-06-10 14:00:21.995116] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:07.737 [2024-06-10 14:00:21.995128] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:07.737 [2024-06-10 14:00:21.995138] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:07.737 [2024-06-10 14:00:21.995248] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:32:07.737 [2024-06-10 14:00:21.995366] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:32:07.737 [2024-06-10 14:00:21.995367] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.308 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:08.308 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:32:08.308 14:00:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:08.308 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:08.308 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.308 14:00:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.308 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:08.308 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.308 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.308 [2024-06-10 14:00:22.749330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.308 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.308 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:08.308 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.308 14:00:22 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.577 Malloc0 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.577 [2024-06-10 14:00:22.813810] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.577 [2024-06-10 14:00:22.821737] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.577 Malloc1 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 
00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1540399 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1540399 /var/tmp/bdevperf.sock 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 1540399 ']' 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:08.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:08.577 14:00:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.516 14:00:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:09.516 14:00:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:32:09.516 14:00:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:32:09.516 14:00:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.516 14:00:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.775 NVMe0n1 00:32:09.775 14:00:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.775 14:00:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:09.775 14:00:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.775 14:00:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:32:09.775 14:00:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.775 1 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.775 request: 00:32:09.775 { 00:32:09.775 "name": "NVMe0", 00:32:09.775 "trtype": "tcp", 00:32:09.775 "traddr": "10.0.0.2", 00:32:09.775 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:32:09.775 "hostaddr": "10.0.0.2", 00:32:09.775 "hostsvcid": "60000", 00:32:09.775 "adrfam": "ipv4", 00:32:09.775 "trsvcid": "4420", 00:32:09.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.775 "method": 
"bdev_nvme_attach_controller", 00:32:09.775 "req_id": 1 00:32:09.775 } 00:32:09.775 Got JSON-RPC error response 00:32:09.775 response: 00:32:09.775 { 00:32:09.775 "code": -114, 00:32:09.775 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:32:09.775 } 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.775 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.775 request: 00:32:09.775 { 00:32:09.775 "name": "NVMe0", 00:32:09.775 "trtype": "tcp", 00:32:09.775 "traddr": "10.0.0.2", 00:32:09.776 "hostaddr": "10.0.0.2", 00:32:09.776 "hostsvcid": "60000", 00:32:09.776 "adrfam": "ipv4", 00:32:09.776 "trsvcid": "4420", 00:32:09.776 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:09.776 "method": "bdev_nvme_attach_controller", 00:32:09.776 "req_id": 1 00:32:09.776 } 00:32:09.776 Got JSON-RPC error response 00:32:09.776 response: 00:32:09.776 { 00:32:09.776 "code": -114, 00:32:09.776 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:32:09.776 } 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.776 request: 00:32:09.776 { 00:32:09.776 "name": "NVMe0", 00:32:09.776 "trtype": "tcp", 00:32:09.776 "traddr": "10.0.0.2", 00:32:09.776 "hostaddr": "10.0.0.2", 00:32:09.776 "hostsvcid": "60000", 00:32:09.776 "adrfam": "ipv4", 00:32:09.776 "trsvcid": "4420", 00:32:09.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.776 "multipath": "disable", 00:32:09.776 "method": "bdev_nvme_attach_controller", 00:32:09.776 "req_id": 1 00:32:09.776 } 00:32:09.776 Got JSON-RPC error response 00:32:09.776 response: 00:32:09.776 { 00:32:09.776 "code": -114, 00:32:09.776 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:32:09.776 } 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.776 request: 00:32:09.776 { 00:32:09.776 "name": "NVMe0", 00:32:09.776 "trtype": "tcp", 00:32:09.776 "traddr": "10.0.0.2", 00:32:09.776 "hostaddr": "10.0.0.2", 00:32:09.776 "hostsvcid": "60000", 00:32:09.776 "adrfam": "ipv4", 00:32:09.776 "trsvcid": "4420", 00:32:09.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.776 "multipath": "failover", 00:32:09.776 "method": "bdev_nvme_attach_controller", 00:32:09.776 "req_id": 1 00:32:09.776 } 00:32:09.776 Got JSON-RPC error response 00:32:09.776 response: 00:32:09.776 { 00:32:09.776 "code": -114, 00:32:09.776 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:32:09.776 } 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.776 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.776 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:10.035 00:32:10.035 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.035 14:00:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:10.035 14:00:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:32:10.035 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.035 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:10.035 14:00:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.035 14:00:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:32:10.035 14:00:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:11.413 0 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1540399 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 1540399 ']' 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 1540399 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1540399 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1540399' 00:32:11.413 killing process with pid 1540399 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 1540399 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 1540399 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:11.413 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:11.414 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:11.414 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:11.414 14:00:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:11.414 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:11.414 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:11.414 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:11.414 14:00:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:32:11.414 14:00:25 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:11.414 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:32:11.414 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:32:11.414 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:32:11.414 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:32:11.672 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:32:11.673 [2024-06-10 14:00:22.925054] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:32:11.673 [2024-06-10 14:00:22.925108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540399 ] 00:32:11.673 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.673 [2024-06-10 14:00:23.030556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.673 [2024-06-10 14:00:23.118593] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.673 [2024-06-10 14:00:24.410476] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name cd6d18eb-ffbd-407e-8ab1-275373afce6a already exists 00:32:11.673 [2024-06-10 14:00:24.410511] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:cd6d18eb-ffbd-407e-8ab1-275373afce6a alias for bdev NVMe1n1 00:32:11.673 [2024-06-10 14:00:24.410527] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:32:11.673 Running I/O for 1 seconds... 
00:32:11.673 00:32:11.673 Latency(us) 00:32:11.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.673 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:32:11.673 NVMe0n1 : 1.01 18451.82 72.08 0.00 0.00 6917.37 4351.59 15623.78 00:32:11.673 =================================================================================================================== 00:32:11.673 Total : 18451.82 72.08 0.00 0.00 6917.37 4351.59 15623.78 00:32:11.673 Received shutdown signal, test time was about 1.000000 seconds 00:32:11.673 00:32:11.673 Latency(us) 00:32:11.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.673 =================================================================================================================== 00:32:11.673 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:11.673 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:11.673 rmmod nvme_tcp 00:32:11.673 rmmod nvme_fabrics 00:32:11.673 rmmod nvme_keyring 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1540129 ']' 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1540129 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 1540129 ']' 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 1540129 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:11.673 14:00:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1540129 00:32:11.673 14:00:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:11.673 14:00:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:11.673 14:00:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1540129' 00:32:11.673 killing process with pid 1540129 00:32:11.673 14:00:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 1540129 00:32:11.673 14:00:26 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 1540129 00:32:11.932 14:00:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:11.932 14:00:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:11.932 14:00:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:11.932 14:00:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:11.932 14:00:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:11.932 14:00:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.932 14:00:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:11.932 14:00:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.472 14:00:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:14.472 00:32:14.472 real 0m15.456s 00:32:14.472 user 0m18.252s 00:32:14.472 sys 0m7.835s 00:32:14.472 14:00:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:14.472 14:00:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:14.472 ************************************ 00:32:14.472 END TEST nvmf_multicontroller 00:32:14.472 ************************************ 00:32:14.472 14:00:28 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:32:14.472 14:00:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:14.472 14:00:28 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:14.472 14:00:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:14.472 ************************************ 00:32:14.472 START TEST nvmf_aer 00:32:14.472 ************************************ 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:32:14.472 * Looking for test storage... 
00:32:14.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.472 14:00:28 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:32:14.473 14:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:22.590 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:32:22.590 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:22.590 Found net devices under 0000:af:00.0: cvl_0_0 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:22.590 Found net devices under 0000:af:00.1: cvl_0_1 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:22.590 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:22.591 
14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:22.591 14:00:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:22.591 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:22.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:32:22.591 00:32:22.591 --- 10.0.0.2 ping statistics --- 00:32:22.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.591 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:32:22.591 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:22.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:32:22.591 00:32:22.591 --- 10.0.0.1 ping statistics --- 00:32:22.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.591 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:32:22.591 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.591 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:32:22.591 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:22.591 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.591 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:22.591 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:22.591 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.591 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:22.591 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1545377 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1545377 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 1545377 ']' 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:22.851 14:00:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:22.851 [2024-06-10 14:00:37.133591] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:32:22.851 [2024-06-10 14:00:37.133650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.851 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.851 [2024-06-10 14:00:37.252205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:23.108 [2024-06-10 14:00:37.340552] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:23.108 [2024-06-10 14:00:37.340601] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:23.108 [2024-06-10 14:00:37.340615] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:23.108 [2024-06-10 14:00:37.340627] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:23.108 [2024-06-10 14:00:37.340637] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:23.108 [2024-06-10 14:00:37.340829] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.108 [2024-06-10 14:00:37.340921] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:32:23.108 [2024-06-10 14:00:37.341035] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.108 [2024-06-10 14:00:37.341035] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:23.675 [2024-06-10 14:00:38.101851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:23.675 Malloc0 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:23.675 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:23.935 [2024-06-10 14:00:38.157915] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:23.935 [ 00:32:23.935 { 00:32:23.935 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:23.935 "subtype": "Discovery", 00:32:23.935 "listen_addresses": [], 00:32:23.935 "allow_any_host": true, 00:32:23.935 "hosts": [] 00:32:23.935 }, 00:32:23.935 { 00:32:23.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:23.935 "subtype": "NVMe", 00:32:23.935 "listen_addresses": [ 00:32:23.935 { 00:32:23.935 "trtype": "TCP", 00:32:23.935 "adrfam": "IPv4", 00:32:23.935 "traddr": "10.0.0.2", 00:32:23.935 "trsvcid": "4420" 00:32:23.935 } 00:32:23.935 ], 00:32:23.935 "allow_any_host": true, 00:32:23.935 "hosts": [], 00:32:23.935 "serial_number": "SPDK00000000000001", 00:32:23.935 "model_number": "SPDK bdev Controller", 00:32:23.935 "max_namespaces": 2, 00:32:23.935 "min_cntlid": 1, 00:32:23.935 "max_cntlid": 65519, 00:32:23.935 "namespaces": [ 00:32:23.935 { 00:32:23.935 "nsid": 1, 00:32:23.935 "bdev_name": "Malloc0", 00:32:23.935 "name": "Malloc0", 00:32:23.935 "nguid": "FA54767316B14FB68734FC852BA09F13", 00:32:23.935 "uuid": "fa547673-16b1-4fb6-8734-fc852ba09f13" 00:32:23.935 } 00:32:23.935 ] 00:32:23.935 } 00:32:23.935 ] 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1545589 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:32:23.935 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 2 -lt 200 ']' 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=3 00:32:23.935 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:24.195 Malloc1 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:24.195 [ 00:32:24.195 { 00:32:24.195 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:24.195 "subtype": "Discovery", 00:32:24.195 "listen_addresses": [], 00:32:24.195 "allow_any_host": true, 00:32:24.195 "hosts": [] 00:32:24.195 }, 00:32:24.195 { 00:32:24.195 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:24.195 "subtype": "NVMe", 00:32:24.195 "listen_addresses": [ 00:32:24.195 { 00:32:24.195 "trtype": "TCP", 00:32:24.195 "adrfam": "IPv4", 00:32:24.195 "traddr": "10.0.0.2", 00:32:24.195 "trsvcid": "4420" 00:32:24.195 } 00:32:24.195 ], 00:32:24.195 "allow_any_host": true, 00:32:24.195 "hosts": [], 00:32:24.195 "serial_number": "SPDK00000000000001", 00:32:24.195 "model_number": "SPDK bdev Controller", 00:32:24.195 "max_namespaces": 2, 00:32:24.195 "min_cntlid": 1, 00:32:24.195 "max_cntlid": 65519, 00:32:24.195 "namespaces": [ 00:32:24.195 { 00:32:24.195 Asynchronous Event Request test 00:32:24.195 Attaching to 10.0.0.2 00:32:24.195 Attached to 10.0.0.2 00:32:24.195 Registering asynchronous event callbacks... 00:32:24.195 Starting namespace attribute notice tests for all controllers... 00:32:24.195 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:32:24.195 aer_cb - Changed Namespace 00:32:24.195 Cleaning up... 
00:32:24.195 "nsid": 1, 00:32:24.195 "bdev_name": "Malloc0", 00:32:24.195 "name": "Malloc0", 00:32:24.195 "nguid": "FA54767316B14FB68734FC852BA09F13", 00:32:24.195 "uuid": "fa547673-16b1-4fb6-8734-fc852ba09f13" 00:32:24.195 }, 00:32:24.195 { 00:32:24.195 "nsid": 2, 00:32:24.195 "bdev_name": "Malloc1", 00:32:24.195 "name": "Malloc1", 00:32:24.195 "nguid": "2E348C11F9D445E7B12A50A56CC7D335", 00:32:24.195 "uuid": "2e348c11-f9d4-45e7-b12a-50a56cc7d335" 00:32:24.195 } 00:32:24.195 ] 00:32:24.195 } 00:32:24.195 ] 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:24.195 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1545589 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:24.196 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:24.196 rmmod nvme_tcp 00:32:24.196 rmmod nvme_fabrics 00:32:24.458 rmmod nvme_keyring 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1545377 ']' 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1545377 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 1545377 ']' 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 1545377 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1545377 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1545377' 00:32:24.458 killing process with pid 1545377 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 1545377 00:32:24.458 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 1545377 00:32:24.716 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:24.716 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:24.716 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:24.716 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:24.716 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:24.716 14:00:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.716 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:24.716 14:00:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.636 14:00:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:26.636 00:32:26.636 real 0m12.602s 00:32:26.636 user 0m8.747s 00:32:26.636 sys 0m7.086s 00:32:26.636 14:00:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:26.636 14:00:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:26.636 ************************************ 00:32:26.636 END TEST nvmf_aer 00:32:26.636 ************************************ 00:32:26.636 14:00:41 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:32:26.636 14:00:41 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:26.636 14:00:41 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:26.636 14:00:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:26.895 ************************************ 00:32:26.895 START TEST nvmf_async_init 00:32:26.895 ************************************ 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:32:26.895 * Looking for test storage... 
00:32:26.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:32:26.895 14:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ff92c69d153d46cfad7741f6e3595a0f 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:26.896 14:00:41 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:32:26.896 14:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:35.079 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:35.338 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:35.338 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:35.339 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:35.339 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:35.339 Found net devices under 0000:af:00.0: cvl_0_0 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:35.339 Found net devices under 0000:af:00.1: cvl_0_1 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:35.339 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:35.605 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:35.605 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:32:35.605 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:35.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:35.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:32:35.605 00:32:35.605 --- 10.0.0.2 ping statistics --- 00:32:35.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.605 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:32:35.605 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:35.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:35.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:32:35.606 00:32:35.606 --- 10.0.0.1 ping statistics --- 00:32:35.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.606 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1550103 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1550103 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 1550103 ']' 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:35.606 14:00:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:35.606 [2024-06-10 14:00:49.968021] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:32:35.606 [2024-06-10 14:00:49.968083] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.606 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.865 [2024-06-10 14:00:50.097041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.865 [2024-06-10 14:00:50.183622] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:35.865 [2024-06-10 14:00:50.183666] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:35.865 [2024-06-10 14:00:50.183680] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:35.865 [2024-06-10 14:00:50.183691] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:35.865 [2024-06-10 14:00:50.183701] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:35.865 [2024-06-10 14:00:50.183726] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.431 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:36.431 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:32:36.431 14:00:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:36.431 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:36.431 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:36.690 [2024-06-10 14:00:50.923612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:36.690 null0 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ff92c69d153d46cfad7741f6e3595a0f 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:36.690 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.691 14:00:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:36.691 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.691 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:36.691 [2024-06-10 14:00:50.971877] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.691 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.691 14:00:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:32:36.691 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.691 14:00:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:36.950 nvme0n1 00:32:36.950 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.950 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:36.950 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.950 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:36.950 [ 00:32:36.950 { 00:32:36.950 "name": "nvme0n1", 00:32:36.950 "aliases": [ 00:32:36.950 "ff92c69d-153d-46cf-ad77-41f6e3595a0f" 00:32:36.950 ], 00:32:36.950 "product_name": "NVMe disk", 00:32:36.950 "block_size": 512, 00:32:36.950 "num_blocks": 2097152, 00:32:36.950 "uuid": "ff92c69d-153d-46cf-ad77-41f6e3595a0f", 00:32:36.950 "assigned_rate_limits": { 00:32:36.950 "rw_ios_per_sec": 0, 00:32:36.950 "rw_mbytes_per_sec": 0, 00:32:36.950 "r_mbytes_per_sec": 0, 00:32:36.950 "w_mbytes_per_sec": 0 00:32:36.950 }, 00:32:36.950 "claimed": false, 00:32:36.950 "zoned": false, 00:32:36.950 "supported_io_types": { 00:32:36.950 "read": true, 00:32:36.950 "write": true, 00:32:36.950 "unmap": false, 00:32:36.950 "write_zeroes": true, 00:32:36.950 "flush": true, 00:32:36.950 "reset": true, 00:32:36.950 "compare": true, 00:32:36.950 "compare_and_write": true, 00:32:36.950 "abort": true, 00:32:36.950 "nvme_admin": true, 00:32:36.950 "nvme_io": true 00:32:36.950 }, 00:32:36.950 "memory_domains": [ 00:32:36.950 { 00:32:36.950 "dma_device_id": "system", 00:32:36.950 "dma_device_type": 1 00:32:36.950 } 00:32:36.950 ], 00:32:36.950 "driver_specific": { 00:32:36.950 "nvme": [ 00:32:36.950 { 00:32:36.950 "trid": { 00:32:36.950 "trtype": "TCP", 00:32:36.950 "adrfam": "IPv4", 00:32:36.950 "traddr": "10.0.0.2", 00:32:36.950 "trsvcid": "4420", 00:32:36.950 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:36.950 }, 00:32:36.950 "ctrlr_data": { 00:32:36.950 "cntlid": 1, 00:32:36.950 "vendor_id": "0x8086", 00:32:36.950 "model_number": "SPDK bdev Controller", 00:32:36.950 "serial_number": "00000000000000000000", 00:32:36.950 "firmware_revision": 
"24.09", 00:32:36.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:36.950 "oacs": { 00:32:36.950 "security": 0, 00:32:36.950 "format": 0, 00:32:36.950 "firmware": 0, 00:32:36.950 "ns_manage": 0 00:32:36.950 }, 00:32:36.950 "multi_ctrlr": true, 00:32:36.950 "ana_reporting": false 00:32:36.950 }, 00:32:36.950 "vs": { 00:32:36.950 "nvme_version": "1.3" 00:32:36.950 }, 00:32:36.950 "ns_data": { 00:32:36.950 "id": 1, 00:32:36.950 "can_share": true 00:32:36.950 } 00:32:36.950 } 00:32:36.950 ], 00:32:36.950 "mp_policy": "active_passive" 00:32:36.950 } 00:32:36.950 } 00:32:36.950 ] 00:32:36.950 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.950 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:32:36.950 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.950 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:36.951 [2024-06-10 14:00:51.244452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.951 [2024-06-10 14:00:51.244523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158ba20 (9): Bad file descriptor 00:32:36.951 [2024-06-10 14:00:51.376700] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:36.951 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.951 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:36.951 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.951 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:36.951 [ 00:32:36.951 { 00:32:36.951 "name": "nvme0n1", 00:32:36.951 "aliases": [ 00:32:36.951 "ff92c69d-153d-46cf-ad77-41f6e3595a0f" 00:32:36.951 ], 00:32:36.951 "product_name": "NVMe disk", 00:32:36.951 "block_size": 512, 00:32:36.951 "num_blocks": 2097152, 00:32:36.951 "uuid": "ff92c69d-153d-46cf-ad77-41f6e3595a0f", 00:32:36.951 "assigned_rate_limits": { 00:32:36.951 "rw_ios_per_sec": 0, 00:32:36.951 "rw_mbytes_per_sec": 0, 00:32:36.951 "r_mbytes_per_sec": 0, 00:32:36.951 "w_mbytes_per_sec": 0 00:32:36.951 }, 00:32:36.951 "claimed": false, 00:32:36.951 "zoned": false, 00:32:36.951 "supported_io_types": { 00:32:36.951 "read": true, 00:32:36.951 "write": true, 00:32:36.951 "unmap": false, 00:32:36.951 "write_zeroes": true, 00:32:36.951 "flush": true, 00:32:36.951 "reset": true, 00:32:36.951 "compare": true, 00:32:36.951 "compare_and_write": true, 00:32:36.951 "abort": true, 00:32:36.951 "nvme_admin": true, 00:32:36.951 "nvme_io": true 00:32:36.951 }, 00:32:36.951 "memory_domains": [ 00:32:36.951 { 00:32:36.951 "dma_device_id": "system", 00:32:36.951 "dma_device_type": 1 00:32:36.951 } 00:32:36.951 ], 00:32:36.951 "driver_specific": { 00:32:36.951 "nvme": [ 00:32:36.951 { 00:32:36.951 "trid": { 00:32:36.951 "trtype": "TCP", 00:32:36.951 "adrfam": "IPv4", 00:32:36.951 "traddr": "10.0.0.2", 00:32:36.951 "trsvcid": "4420", 00:32:36.951 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:36.951 }, 00:32:36.951 "ctrlr_data": { 00:32:36.951 "cntlid": 2, 00:32:36.951 "vendor_id": "0x8086", 00:32:36.951 "model_number": "SPDK bdev Controller", 00:32:36.951 "serial_number": "00000000000000000000", 00:32:36.951 "firmware_revision": "24.09", 00:32:36.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:36.951 
"oacs": { 00:32:36.951 "security": 0, 00:32:36.951 "format": 0, 00:32:36.951 "firmware": 0, 00:32:36.951 "ns_manage": 0 00:32:36.951 }, 00:32:36.951 "multi_ctrlr": true, 00:32:36.951 "ana_reporting": false 00:32:36.951 }, 00:32:36.951 "vs": { 00:32:36.951 "nvme_version": "1.3" 00:32:36.951 }, 00:32:36.951 "ns_data": { 00:32:36.951 "id": 1, 00:32:36.951 "can_share": true 00:32:36.951 } 00:32:36.951 } 00:32:36.951 ], 00:32:36.951 "mp_policy": "active_passive" 00:32:36.951 } 00:32:36.951 } 00:32:36.951 ] 00:32:36.951 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.951 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.951 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.951 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.pLekMkc1GU 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.pLekMkc1GU 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:37.211 [2024-06-10 14:00:51.449132] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:37.211 [2024-06-10 14:00:51.449295] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pLekMkc1GU 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:37.211 [2024-06-10 14:00:51.457150] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pLekMkc1GU 00:32:37.211 14:00:51 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:37.211 [2024-06-10 14:00:51.469184] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:37.211 [2024-06-10 14:00:51.469232] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:32:37.211 nvme0n1 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:37.211 [ 00:32:37.211 { 00:32:37.211 "name": "nvme0n1", 00:32:37.211 "aliases": [ 00:32:37.211 "ff92c69d-153d-46cf-ad77-41f6e3595a0f" 00:32:37.211 ], 00:32:37.211 "product_name": "NVMe disk", 00:32:37.211 "block_size": 512, 00:32:37.211 "num_blocks": 2097152, 00:32:37.211 "uuid": "ff92c69d-153d-46cf-ad77-41f6e3595a0f", 00:32:37.211 "assigned_rate_limits": { 00:32:37.211 "rw_ios_per_sec": 0, 00:32:37.211 "rw_mbytes_per_sec": 0, 00:32:37.211 "r_mbytes_per_sec": 0, 00:32:37.211 "w_mbytes_per_sec": 0 00:32:37.211 }, 00:32:37.211 "claimed": false, 00:32:37.211 "zoned": false, 00:32:37.211 "supported_io_types": { 00:32:37.211 "read": true, 00:32:37.211 "write": true, 00:32:37.211 "unmap": false, 00:32:37.211 "write_zeroes": true, 00:32:37.211 "flush": true, 00:32:37.211 "reset": true, 00:32:37.211 "compare": true, 00:32:37.211 "compare_and_write": true, 00:32:37.211 "abort": true, 00:32:37.211 "nvme_admin": true, 00:32:37.211 "nvme_io": true 00:32:37.211 }, 00:32:37.211 "memory_domains": [ 00:32:37.211 { 00:32:37.211 "dma_device_id": "system", 00:32:37.211 "dma_device_type": 1 00:32:37.211 } 00:32:37.211 ], 00:32:37.211 "driver_specific": { 00:32:37.211 "nvme": [ 00:32:37.211 { 00:32:37.211 "trid": { 00:32:37.211 "trtype": "TCP", 00:32:37.211 "adrfam": "IPv4", 00:32:37.211 "traddr": "10.0.0.2", 00:32:37.211 "trsvcid": "4421", 00:32:37.211 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:37.211 }, 00:32:37.211 "ctrlr_data": { 00:32:37.211 "cntlid": 3, 00:32:37.211 "vendor_id": "0x8086", 00:32:37.211 "model_number": "SPDK bdev Controller", 00:32:37.211 "serial_number": "00000000000000000000", 00:32:37.211 "firmware_revision": "24.09", 00:32:37.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:37.211 "oacs": { 00:32:37.211 "security": 0, 00:32:37.211 "format": 0, 00:32:37.211 "firmware": 0, 00:32:37.211 "ns_manage": 0 00:32:37.211 }, 00:32:37.211 "multi_ctrlr": true, 00:32:37.211 "ana_reporting": false 00:32:37.211 }, 00:32:37.211 "vs": { 00:32:37.211 "nvme_version": "1.3" 00:32:37.211 }, 00:32:37.211 "ns_data": { 00:32:37.211 "id": 1, 00:32:37.211 "can_share": true 00:32:37.211 } 00:32:37.211 } 00:32:37.211 ], 00:32:37.211 "mp_policy": "active_passive" 00:32:37.211 } 00:32:37.211 } 00:32:37.211 ] 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # 
set +x 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.pLekMkc1GU 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:37.211 rmmod nvme_tcp 00:32:37.211 rmmod nvme_fabrics 00:32:37.211 rmmod nvme_keyring 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1550103 ']' 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1550103 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 1550103 ']' 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 1550103 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:37.211 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1550103 00:32:37.471 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:37.471 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:37.471 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1550103' 00:32:37.471 killing process with pid 1550103 00:32:37.471 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 1550103 00:32:37.471 [2024-06-10 14:00:51.710547] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:32:37.471 [2024-06-10 14:00:51.710583] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:37.471 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 1550103 00:32:37.471 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:37.471 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:37.471 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:37.471 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:37.471 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:37.471 14:00:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.471 
14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:37.471 14:00:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.005 14:00:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:40.005 00:32:40.005 real 0m12.848s 00:32:40.005 user 0m4.529s 00:32:40.005 sys 0m7.122s 00:32:40.005 14:00:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:40.005 14:00:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:40.005 ************************************ 00:32:40.005 END TEST nvmf_async_init 00:32:40.005 ************************************ 00:32:40.005 14:00:54 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:32:40.005 14:00:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:40.005 14:00:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:40.005 14:00:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:40.005 ************************************ 00:32:40.005 START TEST dma 00:32:40.005 ************************************ 00:32:40.005 14:00:54 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:32:40.005 * Looking for test storage... 00:32:40.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:40.005 14:00:54 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.005 14:00:54 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.005 14:00:54 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.005 14:00:54 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.005 14:00:54 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.005 14:00:54 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.005 14:00:54 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.005 14:00:54 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:32:40.005 14:00:54 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:40.005 14:00:54 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:40.005 14:00:54 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:32:40.005 14:00:54 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:32:40.005 00:32:40.005 real 0m0.144s 00:32:40.005 user 0m0.053s 00:32:40.005 sys 0m0.101s 00:32:40.005 
14:00:54 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:40.005 14:00:54 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:32:40.005 ************************************ 00:32:40.005 END TEST dma 00:32:40.005 ************************************ 00:32:40.005 14:00:54 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:40.005 14:00:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:40.005 14:00:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:40.005 14:00:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:40.005 ************************************ 00:32:40.005 START TEST nvmf_identify 00:32:40.005 ************************************ 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:40.005 * Looking for test storage... 00:32:40.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.005 14:00:54 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:32:40.005 14:00:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.991 14:01:02 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:49.991 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:49.991 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:49.991 Found net devices under 0000:af:00.0: cvl_0_0 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:49.991 Found net devices under 0000:af:00.1: cvl_0_1 00:32:49.991 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:49.992 14:01:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:49.992 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:49.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:32:49.992 00:32:49.992 --- 10.0.0.2 ping statistics --- 00:32:49.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.992 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:49.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:49.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:32:49.992 00:32:49.992 --- 10.0.0.1 ping statistics --- 00:32:49.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.992 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1554871 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1554871 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 1554871 ']' 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:49.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:49.992 14:01:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:49.992 [2024-06-10 14:01:03.120213] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
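For reference, the interface plumbing and connectivity check that the nvmf_tcp_init trace above walks through can be reproduced by hand roughly as below. This is a sketch assembled from the commands visible in the trace; the port names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk, the 10.0.0.x addresses and the nvmf_tgt flags are all specific to this run and will differ on other hosts.

  # Wire one E810 port as the target (inside a netns) and the other as the
  # initiator, exactly as the trace above does, then verify both directions.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                  # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
  # The target is then started inside the namespace (the harness backgrounds
  # this and records its PID as nvmfpid, 1554871 in this run):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &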
00:32:49.992 [2024-06-10 14:01:03.120271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:49.992 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.992 [2024-06-10 14:01:03.240604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:49.992 [2024-06-10 14:01:03.328145] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:49.992 [2024-06-10 14:01:03.328194] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:49.992 [2024-06-10 14:01:03.328207] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:49.992 [2024-06-10 14:01:03.328219] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:49.992 [2024-06-10 14:01:03.328229] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:49.992 [2024-06-10 14:01:03.328298] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.992 [2024-06-10 14:01:03.328389] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:32:49.992 [2024-06-10 14:01:03.328503] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.992 [2024-06-10 14:01:03.328503] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:49.992 [2024-06-10 14:01:04.041686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:49.992 Malloc0 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:49.992 [2024-06-10 14:01:04.142003] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:49.992 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:49.992 [ 00:32:49.992 { 00:32:49.992 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:49.992 "subtype": "Discovery", 00:32:49.992 "listen_addresses": [ 00:32:49.992 { 00:32:49.992 "trtype": "TCP", 00:32:49.992 "adrfam": "IPv4", 00:32:49.992 "traddr": "10.0.0.2", 00:32:49.992 "trsvcid": "4420" 00:32:49.992 } 00:32:49.992 ], 00:32:49.992 "allow_any_host": true, 00:32:49.992 "hosts": [] 00:32:49.992 }, 00:32:49.992 { 00:32:49.992 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:49.992 "subtype": "NVMe", 00:32:49.992 "listen_addresses": [ 00:32:49.992 { 00:32:49.992 "trtype": "TCP", 00:32:49.992 "adrfam": "IPv4", 00:32:49.992 "traddr": "10.0.0.2", 00:32:49.992 "trsvcid": "4420" 00:32:49.993 } 00:32:49.993 ], 00:32:49.993 "allow_any_host": true, 00:32:49.993 "hosts": [], 00:32:49.993 "serial_number": "SPDK00000000000001", 00:32:49.993 "model_number": "SPDK bdev Controller", 00:32:49.993 "max_namespaces": 32, 00:32:49.993 "min_cntlid": 1, 00:32:49.993 "max_cntlid": 65519, 00:32:49.993 "namespaces": [ 00:32:49.993 { 00:32:49.993 "nsid": 1, 00:32:49.993 "bdev_name": "Malloc0", 00:32:49.993 "name": "Malloc0", 00:32:49.993 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:32:49.993 "eui64": "ABCDEF0123456789", 00:32:49.993 "uuid": "79aa67ee-eaf7-4de5-8e04-0a1862f54760" 00:32:49.993 } 00:32:49.993 ] 00:32:49.993 } 00:32:49.993 ] 00:32:49.993 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:49.993 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:32:49.993 [2024-06-10 14:01:04.199655] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
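The rpc_cmd sequence traced above amounts to the following plain scripts/rpc.py calls. This is a sketch of the same configuration with the arguments taken from the trace; invoking rpc.py directly (rather than through the rpc_cmd wrapper and netns exec used by the test) and the relative script path are assumptions.

  # Sketch of the target configuration exercised by identify.sh in this run.
  # Assumes an nvmf_tgt is already running and listening on the default RPC socket.
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192     # transport opts mirror NVMF_TRANSPORT_OPTS above
  $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                         # emits the JSON dump shown above

The identify pass whose startup banner begins above is then a single invocation of build/bin/spdk_nvme_identify with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all, which produces the discovery-controller report further down.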
00:32:49.993 [2024-06-10 14:01:04.199697] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1555151 ] 00:32:49.993 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.993 [2024-06-10 14:01:04.235128] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:32:49.993 [2024-06-10 14:01:04.235182] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:49.993 [2024-06-10 14:01:04.235190] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:49.993 [2024-06-10 14:01:04.235205] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:49.993 [2024-06-10 14:01:04.235218] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:49.993 [2024-06-10 14:01:04.238630] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:32:49.993 [2024-06-10 14:01:04.238671] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd21f00 0 00:32:49.993 [2024-06-10 14:01:04.246587] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:49.993 [2024-06-10 14:01:04.246605] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:49.993 [2024-06-10 14:01:04.246612] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:49.993 [2024-06-10 14:01:04.246619] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:49.993 [2024-06-10 14:01:04.246671] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.246680] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.246687] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21f00) 00:32:49.993 [2024-06-10 14:01:04.246703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:49.993 [2024-06-10 14:01:04.246725] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8cdf0, cid 0, qid 0 00:32:49.993 [2024-06-10 14:01:04.253587] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.993 [2024-06-10 14:01:04.253601] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.993 [2024-06-10 14:01:04.253607] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.253615] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8cdf0) on tqpair=0xd21f00 00:32:49.993 [2024-06-10 14:01:04.253631] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:49.993 [2024-06-10 14:01:04.253640] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:32:49.993 [2024-06-10 14:01:04.253650] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:32:49.993 [2024-06-10 14:01:04.253669] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.253676] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.253683] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21f00) 00:32:49.993 [2024-06-10 14:01:04.253694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.993 [2024-06-10 14:01:04.253712] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8cdf0, cid 0, qid 0 00:32:49.993 [2024-06-10 14:01:04.253932] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.993 [2024-06-10 14:01:04.253942] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.993 [2024-06-10 14:01:04.253948] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.253956] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8cdf0) on tqpair=0xd21f00 00:32:49.993 [2024-06-10 14:01:04.253964] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:32:49.993 [2024-06-10 14:01:04.253977] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:32:49.993 [2024-06-10 14:01:04.253988] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.253995] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.254001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21f00) 00:32:49.993 [2024-06-10 14:01:04.254011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.993 [2024-06-10 14:01:04.254027] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8cdf0, cid 0, qid 0 00:32:49.993 [2024-06-10 14:01:04.254141] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.993 [2024-06-10 14:01:04.254151] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.993 [2024-06-10 14:01:04.254157] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.254164] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8cdf0) on tqpair=0xd21f00 00:32:49.993 [2024-06-10 14:01:04.254172] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:32:49.993 [2024-06-10 14:01:04.254186] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:32:49.993 [2024-06-10 14:01:04.254197] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.254204] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.254210] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21f00) 00:32:49.993 [2024-06-10 14:01:04.254220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.993 [2024-06-10 14:01:04.254235] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8cdf0, cid 0, qid 0 00:32:49.993 [2024-06-10 14:01:04.254409] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.993 [2024-06-10 14:01:04.254418] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.993 [2024-06-10 14:01:04.254425] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.254431] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8cdf0) on tqpair=0xd21f00 00:32:49.993 [2024-06-10 14:01:04.254440] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:49.993 [2024-06-10 14:01:04.254454] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.254461] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.254468] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21f00) 00:32:49.993 [2024-06-10 14:01:04.254478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.993 [2024-06-10 14:01:04.254493] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8cdf0, cid 0, qid 0 00:32:49.993 [2024-06-10 14:01:04.254608] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.993 [2024-06-10 14:01:04.254619] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.993 [2024-06-10 14:01:04.254625] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.254634] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8cdf0) on tqpair=0xd21f00 00:32:49.993 [2024-06-10 14:01:04.254642] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:32:49.993 [2024-06-10 14:01:04.254651] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:32:49.993 [2024-06-10 14:01:04.254664] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:49.993 [2024-06-10 14:01:04.254773] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:32:49.993 [2024-06-10 14:01:04.254782] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:49.993 [2024-06-10 14:01:04.254794] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.254801] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.993 [2024-06-10 14:01:04.254807] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21f00) 00:32:49.993 [2024-06-10 14:01:04.254817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.993 [2024-06-10 14:01:04.254833] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8cdf0, cid 0, qid 0 00:32:49.993 [2024-06-10 14:01:04.254948] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.993 [2024-06-10 14:01:04.254958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.993 [2024-06-10 14:01:04.254964] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.993 
[2024-06-10 14:01:04.254971] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8cdf0) on tqpair=0xd21f00 00:32:49.993 [2024-06-10 14:01:04.254979] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:49.994 [2024-06-10 14:01:04.254993] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.255000] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.255007] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21f00) 00:32:49.994 [2024-06-10 14:01:04.255017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.994 [2024-06-10 14:01:04.255032] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8cdf0, cid 0, qid 0 00:32:49.994 [2024-06-10 14:01:04.255199] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.994 [2024-06-10 14:01:04.255208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.994 [2024-06-10 14:01:04.255215] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.255221] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8cdf0) on tqpair=0xd21f00 00:32:49.994 [2024-06-10 14:01:04.255229] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:49.994 [2024-06-10 14:01:04.255238] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:32:49.994 [2024-06-10 14:01:04.255251] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:32:49.994 [2024-06-10 14:01:04.255264] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:32:49.994 [2024-06-10 14:01:04.255277] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.255284] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21f00) 00:32:49.994 [2024-06-10 14:01:04.255296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.994 [2024-06-10 14:01:04.255312] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8cdf0, cid 0, qid 0 00:32:49.994 [2024-06-10 14:01:04.255518] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:49.994 [2024-06-10 14:01:04.255527] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:49.994 [2024-06-10 14:01:04.255534] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.255541] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd21f00): datao=0, datal=4096, cccid=0 00:32:49.994 [2024-06-10 14:01:04.255549] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8cdf0) on tqpair(0xd21f00): expected_datao=0, payload_size=4096 00:32:49.994 [2024-06-10 14:01:04.255558] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.994 
[2024-06-10 14:01:04.255607] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.255616] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.298585] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.994 [2024-06-10 14:01:04.298599] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.994 [2024-06-10 14:01:04.298606] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.298613] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8cdf0) on tqpair=0xd21f00 00:32:49.994 [2024-06-10 14:01:04.298625] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:32:49.994 [2024-06-10 14:01:04.298634] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:32:49.994 [2024-06-10 14:01:04.298642] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:32:49.994 [2024-06-10 14:01:04.298655] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:32:49.994 [2024-06-10 14:01:04.298664] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:32:49.994 [2024-06-10 14:01:04.298673] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:32:49.994 [2024-06-10 14:01:04.298687] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:32:49.994 [2024-06-10 14:01:04.298697] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.298705] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.298711] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21f00) 00:32:49.994 [2024-06-10 14:01:04.298722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:49.994 [2024-06-10 14:01:04.298741] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8cdf0, cid 0, qid 0 00:32:49.994 [2024-06-10 14:01:04.298936] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.994 [2024-06-10 14:01:04.298946] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.994 [2024-06-10 14:01:04.298953] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.298959] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8cdf0) on tqpair=0xd21f00 00:32:49.994 [2024-06-10 14:01:04.298970] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.298977] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.298983] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21f00) 00:32:49.994 [2024-06-10 14:01:04.298992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.994 [2024-06-10 14:01:04.299006] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299012] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299019] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd21f00) 00:32:49.994 [2024-06-10 14:01:04.299028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.994 [2024-06-10 14:01:04.299038] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299044] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299051] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd21f00) 00:32:49.994 [2024-06-10 14:01:04.299059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.994 [2024-06-10 14:01:04.299069] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299076] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299082] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.994 [2024-06-10 14:01:04.299091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.994 [2024-06-10 14:01:04.299099] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:32:49.994 [2024-06-10 14:01:04.299117] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:49.994 [2024-06-10 14:01:04.299128] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299134] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd21f00) 00:32:49.994 [2024-06-10 14:01:04.299144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.994 [2024-06-10 14:01:04.299162] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8cdf0, cid 0, qid 0 00:32:49.994 [2024-06-10 14:01:04.299171] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8cf50, cid 1, qid 0 00:32:49.994 [2024-06-10 14:01:04.299178] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d0b0, cid 2, qid 0 00:32:49.994 [2024-06-10 14:01:04.299186] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.994 [2024-06-10 14:01:04.299194] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d370, cid 4, qid 0 00:32:49.994 [2024-06-10 14:01:04.299423] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.994 [2024-06-10 14:01:04.299433] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.994 [2024-06-10 14:01:04.299439] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299446] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d370) on tqpair=0xd21f00 00:32:49.994 [2024-06-10 14:01:04.299454] 
nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:32:49.994 [2024-06-10 14:01:04.299463] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:32:49.994 [2024-06-10 14:01:04.299479] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299486] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd21f00) 00:32:49.994 [2024-06-10 14:01:04.299495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.994 [2024-06-10 14:01:04.299511] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d370, cid 4, qid 0 00:32:49.994 [2024-06-10 14:01:04.299648] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:49.994 [2024-06-10 14:01:04.299659] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:49.994 [2024-06-10 14:01:04.299665] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299672] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd21f00): datao=0, datal=4096, cccid=4 00:32:49.994 [2024-06-10 14:01:04.299680] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8d370) on tqpair(0xd21f00): expected_datao=0, payload_size=4096 00:32:49.994 [2024-06-10 14:01:04.299688] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299698] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299705] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299797] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.994 [2024-06-10 14:01:04.299806] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.994 [2024-06-10 14:01:04.299812] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299819] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d370) on tqpair=0xd21f00 00:32:49.994 [2024-06-10 14:01:04.299837] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:32:49.994 [2024-06-10 14:01:04.299867] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.994 [2024-06-10 14:01:04.299874] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd21f00) 00:32:49.995 [2024-06-10 14:01:04.299884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.995 [2024-06-10 14:01:04.299895] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.299901] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.299908] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd21f00) 00:32:49.995 [2024-06-10 14:01:04.299917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.995 [2024-06-10 14:01:04.299941] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xd8d370, cid 4, qid 0 00:32:49.995 [2024-06-10 14:01:04.299950] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d4d0, cid 5, qid 0 00:32:49.995 [2024-06-10 14:01:04.300161] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:49.995 [2024-06-10 14:01:04.300170] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:49.995 [2024-06-10 14:01:04.300176] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.300183] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd21f00): datao=0, datal=1024, cccid=4 00:32:49.995 [2024-06-10 14:01:04.300191] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8d370) on tqpair(0xd21f00): expected_datao=0, payload_size=1024 00:32:49.995 [2024-06-10 14:01:04.300199] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.300209] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.300215] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.300224] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.995 [2024-06-10 14:01:04.300233] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.995 [2024-06-10 14:01:04.300239] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.300246] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d4d0) on tqpair=0xd21f00 00:32:49.995 [2024-06-10 14:01:04.340729] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.995 [2024-06-10 14:01:04.340746] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.995 [2024-06-10 14:01:04.340752] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.340763] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d370) on tqpair=0xd21f00 00:32:49.995 [2024-06-10 14:01:04.340784] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.340792] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd21f00) 00:32:49.995 [2024-06-10 14:01:04.340803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.995 [2024-06-10 14:01:04.340827] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d370, cid 4, qid 0 00:32:49.995 [2024-06-10 14:01:04.341022] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:49.995 [2024-06-10 14:01:04.341032] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:49.995 [2024-06-10 14:01:04.341038] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.341045] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd21f00): datao=0, datal=3072, cccid=4 00:32:49.995 [2024-06-10 14:01:04.341053] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8d370) on tqpair(0xd21f00): expected_datao=0, payload_size=3072 00:32:49.995 [2024-06-10 14:01:04.341061] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.341156] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.341163] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.341246] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.995 [2024-06-10 14:01:04.341256] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.995 [2024-06-10 14:01:04.341262] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.341269] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d370) on tqpair=0xd21f00 00:32:49.995 [2024-06-10 14:01:04.341282] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.341289] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd21f00) 00:32:49.995 [2024-06-10 14:01:04.341299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.995 [2024-06-10 14:01:04.341320] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d370, cid 4, qid 0 00:32:49.995 [2024-06-10 14:01:04.341502] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:49.995 [2024-06-10 14:01:04.341511] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:49.995 [2024-06-10 14:01:04.341517] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.341524] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd21f00): datao=0, datal=8, cccid=4 00:32:49.995 [2024-06-10 14:01:04.341532] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd8d370) on tqpair(0xd21f00): expected_datao=0, payload_size=8 00:32:49.995 [2024-06-10 14:01:04.341540] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.341549] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.341556] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.383590] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.995 [2024-06-10 14:01:04.383604] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.995 [2024-06-10 14:01:04.383611] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.995 [2024-06-10 14:01:04.383618] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d370) on tqpair=0xd21f00 00:32:49.995 ===================================================== 00:32:49.995 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:49.995 ===================================================== 00:32:49.995 Controller Capabilities/Features 00:32:49.995 ================================ 00:32:49.995 Vendor ID: 0000 00:32:49.995 Subsystem Vendor ID: 0000 00:32:49.995 Serial Number: .................... 00:32:49.995 Model Number: ........................................ 
00:32:49.995 Firmware Version: 24.09 00:32:49.995 Recommended Arb Burst: 0 00:32:49.995 IEEE OUI Identifier: 00 00 00 00:32:49.995 Multi-path I/O 00:32:49.995 May have multiple subsystem ports: No 00:32:49.995 May have multiple controllers: No 00:32:49.995 Associated with SR-IOV VF: No 00:32:49.995 Max Data Transfer Size: 131072 00:32:49.995 Max Number of Namespaces: 0 00:32:49.995 Max Number of I/O Queues: 1024 00:32:49.995 NVMe Specification Version (VS): 1.3 00:32:49.995 NVMe Specification Version (Identify): 1.3 00:32:49.995 Maximum Queue Entries: 128 00:32:49.995 Contiguous Queues Required: Yes 00:32:49.995 Arbitration Mechanisms Supported 00:32:49.995 Weighted Round Robin: Not Supported 00:32:49.995 Vendor Specific: Not Supported 00:32:49.995 Reset Timeout: 15000 ms 00:32:49.995 Doorbell Stride: 4 bytes 00:32:49.995 NVM Subsystem Reset: Not Supported 00:32:49.995 Command Sets Supported 00:32:49.995 NVM Command Set: Supported 00:32:49.995 Boot Partition: Not Supported 00:32:49.995 Memory Page Size Minimum: 4096 bytes 00:32:49.995 Memory Page Size Maximum: 4096 bytes 00:32:49.995 Persistent Memory Region: Not Supported 00:32:49.995 Optional Asynchronous Events Supported 00:32:49.995 Namespace Attribute Notices: Not Supported 00:32:49.995 Firmware Activation Notices: Not Supported 00:32:49.995 ANA Change Notices: Not Supported 00:32:49.995 PLE Aggregate Log Change Notices: Not Supported 00:32:49.995 LBA Status Info Alert Notices: Not Supported 00:32:49.995 EGE Aggregate Log Change Notices: Not Supported 00:32:49.995 Normal NVM Subsystem Shutdown event: Not Supported 00:32:49.995 Zone Descriptor Change Notices: Not Supported 00:32:49.995 Discovery Log Change Notices: Supported 00:32:49.995 Controller Attributes 00:32:49.995 128-bit Host Identifier: Not Supported 00:32:49.995 Non-Operational Permissive Mode: Not Supported 00:32:49.995 NVM Sets: Not Supported 00:32:49.995 Read Recovery Levels: Not Supported 00:32:49.995 Endurance Groups: Not Supported 00:32:49.995 Predictable Latency Mode: Not Supported 00:32:49.995 Traffic Based Keep ALive: Not Supported 00:32:49.995 Namespace Granularity: Not Supported 00:32:49.995 SQ Associations: Not Supported 00:32:49.995 UUID List: Not Supported 00:32:49.995 Multi-Domain Subsystem: Not Supported 00:32:49.995 Fixed Capacity Management: Not Supported 00:32:49.995 Variable Capacity Management: Not Supported 00:32:49.995 Delete Endurance Group: Not Supported 00:32:49.995 Delete NVM Set: Not Supported 00:32:49.995 Extended LBA Formats Supported: Not Supported 00:32:49.995 Flexible Data Placement Supported: Not Supported 00:32:49.995 00:32:49.995 Controller Memory Buffer Support 00:32:49.995 ================================ 00:32:49.995 Supported: No 00:32:49.995 00:32:49.995 Persistent Memory Region Support 00:32:49.995 ================================ 00:32:49.995 Supported: No 00:32:49.995 00:32:49.995 Admin Command Set Attributes 00:32:49.995 ============================ 00:32:49.995 Security Send/Receive: Not Supported 00:32:49.995 Format NVM: Not Supported 00:32:49.995 Firmware Activate/Download: Not Supported 00:32:49.996 Namespace Management: Not Supported 00:32:49.996 Device Self-Test: Not Supported 00:32:49.996 Directives: Not Supported 00:32:49.996 NVMe-MI: Not Supported 00:32:49.996 Virtualization Management: Not Supported 00:32:49.996 Doorbell Buffer Config: Not Supported 00:32:49.996 Get LBA Status Capability: Not Supported 00:32:49.996 Command & Feature Lockdown Capability: Not Supported 00:32:49.996 Abort Command Limit: 1 00:32:49.996 Async 
Event Request Limit: 4 00:32:49.996 Number of Firmware Slots: N/A 00:32:49.996 Firmware Slot 1 Read-Only: N/A 00:32:49.996 Firmware Activation Without Reset: N/A 00:32:49.996 Multiple Update Detection Support: N/A 00:32:49.996 Firmware Update Granularity: No Information Provided 00:32:49.996 Per-Namespace SMART Log: No 00:32:49.996 Asymmetric Namespace Access Log Page: Not Supported 00:32:49.996 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:49.996 Command Effects Log Page: Not Supported 00:32:49.996 Get Log Page Extended Data: Supported 00:32:49.996 Telemetry Log Pages: Not Supported 00:32:49.996 Persistent Event Log Pages: Not Supported 00:32:49.996 Supported Log Pages Log Page: May Support 00:32:49.996 Commands Supported & Effects Log Page: Not Supported 00:32:49.996 Feature Identifiers & Effects Log Page:May Support 00:32:49.996 NVMe-MI Commands & Effects Log Page: May Support 00:32:49.996 Data Area 4 for Telemetry Log: Not Supported 00:32:49.996 Error Log Page Entries Supported: 128 00:32:49.996 Keep Alive: Not Supported 00:32:49.996 00:32:49.996 NVM Command Set Attributes 00:32:49.996 ========================== 00:32:49.996 Submission Queue Entry Size 00:32:49.996 Max: 1 00:32:49.996 Min: 1 00:32:49.996 Completion Queue Entry Size 00:32:49.996 Max: 1 00:32:49.996 Min: 1 00:32:49.996 Number of Namespaces: 0 00:32:49.996 Compare Command: Not Supported 00:32:49.996 Write Uncorrectable Command: Not Supported 00:32:49.996 Dataset Management Command: Not Supported 00:32:49.996 Write Zeroes Command: Not Supported 00:32:49.996 Set Features Save Field: Not Supported 00:32:49.996 Reservations: Not Supported 00:32:49.996 Timestamp: Not Supported 00:32:49.996 Copy: Not Supported 00:32:49.996 Volatile Write Cache: Not Present 00:32:49.996 Atomic Write Unit (Normal): 1 00:32:49.996 Atomic Write Unit (PFail): 1 00:32:49.996 Atomic Compare & Write Unit: 1 00:32:49.996 Fused Compare & Write: Supported 00:32:49.996 Scatter-Gather List 00:32:49.996 SGL Command Set: Supported 00:32:49.996 SGL Keyed: Supported 00:32:49.996 SGL Bit Bucket Descriptor: Not Supported 00:32:49.996 SGL Metadata Pointer: Not Supported 00:32:49.996 Oversized SGL: Not Supported 00:32:49.996 SGL Metadata Address: Not Supported 00:32:49.996 SGL Offset: Supported 00:32:49.996 Transport SGL Data Block: Not Supported 00:32:49.996 Replay Protected Memory Block: Not Supported 00:32:49.996 00:32:49.996 Firmware Slot Information 00:32:49.996 ========================= 00:32:49.996 Active slot: 0 00:32:49.996 00:32:49.996 00:32:49.996 Error Log 00:32:49.996 ========= 00:32:49.996 00:32:49.996 Active Namespaces 00:32:49.996 ================= 00:32:49.996 Discovery Log Page 00:32:49.996 ================== 00:32:49.996 Generation Counter: 2 00:32:49.996 Number of Records: 2 00:32:49.996 Record Format: 0 00:32:49.996 00:32:49.996 Discovery Log Entry 0 00:32:49.996 ---------------------- 00:32:49.996 Transport Type: 3 (TCP) 00:32:49.996 Address Family: 1 (IPv4) 00:32:49.996 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:49.996 Entry Flags: 00:32:49.996 Duplicate Returned Information: 1 00:32:49.996 Explicit Persistent Connection Support for Discovery: 1 00:32:49.996 Transport Requirements: 00:32:49.996 Secure Channel: Not Required 00:32:49.996 Port ID: 0 (0x0000) 00:32:49.996 Controller ID: 65535 (0xffff) 00:32:49.996 Admin Max SQ Size: 128 00:32:49.996 Transport Service Identifier: 4420 00:32:49.996 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:49.996 Transport Address: 10.0.0.2 00:32:49.996 
Discovery Log Entry 1 00:32:49.996 ---------------------- 00:32:49.996 Transport Type: 3 (TCP) 00:32:49.996 Address Family: 1 (IPv4) 00:32:49.996 Subsystem Type: 2 (NVM Subsystem) 00:32:49.996 Entry Flags: 00:32:49.996 Duplicate Returned Information: 0 00:32:49.996 Explicit Persistent Connection Support for Discovery: 0 00:32:49.996 Transport Requirements: 00:32:49.996 Secure Channel: Not Required 00:32:49.996 Port ID: 0 (0x0000) 00:32:49.996 Controller ID: 65535 (0xffff) 00:32:49.996 Admin Max SQ Size: 128 00:32:49.996 Transport Service Identifier: 4420 00:32:49.996 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:32:49.996 Transport Address: 10.0.0.2 [2024-06-10 14:01:04.383732] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:32:49.996 [2024-06-10 14:01:04.383751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.996 [2024-06-10 14:01:04.383762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.996 [2024-06-10 14:01:04.383774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.996 [2024-06-10 14:01:04.383785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.996 [2024-06-10 14:01:04.383798] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.996 [2024-06-10 14:01:04.383805] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.996 [2024-06-10 14:01:04.383812] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.996 [2024-06-10 14:01:04.383824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.996 [2024-06-10 14:01:04.383845] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.996 [2024-06-10 14:01:04.384007] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.996 [2024-06-10 14:01:04.384017] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.996 [2024-06-10 14:01:04.384024] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.996 [2024-06-10 14:01:04.384031] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.996 [2024-06-10 14:01:04.384045] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.996 [2024-06-10 14:01:04.384053] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.996 [2024-06-10 14:01:04.384059] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.997 [2024-06-10 14:01:04.384070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.997 [2024-06-10 14:01:04.384091] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.997 [2024-06-10 14:01:04.384290] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.997 [2024-06-10 14:01:04.384299] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.997 [2024-06-10 14:01:04.384306] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.384312] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.997 [2024-06-10 14:01:04.384320] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:32:49.997 [2024-06-10 14:01:04.384329] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:32:49.997 [2024-06-10 14:01:04.384343] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.384351] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.384357] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.997 [2024-06-10 14:01:04.384367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.997 [2024-06-10 14:01:04.384382] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.997 [2024-06-10 14:01:04.384548] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.997 [2024-06-10 14:01:04.384557] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.997 [2024-06-10 14:01:04.384564] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.384571] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.997 [2024-06-10 14:01:04.384593] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.384600] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.384607] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.997 [2024-06-10 14:01:04.384616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.997 [2024-06-10 14:01:04.384636] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.997 [2024-06-10 14:01:04.384754] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.997 [2024-06-10 14:01:04.384764] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.997 [2024-06-10 14:01:04.384772] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.384779] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.997 [2024-06-10 14:01:04.384794] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.384801] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.384808] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.997 [2024-06-10 14:01:04.384818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.997 [2024-06-10 14:01:04.384834] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.997 [2024-06-10 14:01:04.384939] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.997 [2024-06-10 
14:01:04.384949] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.997 [2024-06-10 14:01:04.384956] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.384963] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.997 [2024-06-10 14:01:04.384978] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.384985] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.384992] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.997 [2024-06-10 14:01:04.385004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.997 [2024-06-10 14:01:04.385019] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.997 [2024-06-10 14:01:04.385131] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.997 [2024-06-10 14:01:04.385142] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.997 [2024-06-10 14:01:04.385149] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.385156] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.997 [2024-06-10 14:01:04.385170] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.385177] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.385184] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.997 [2024-06-10 14:01:04.385194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.997 [2024-06-10 14:01:04.385211] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.997 [2024-06-10 14:01:04.385380] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.997 [2024-06-10 14:01:04.385389] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.997 [2024-06-10 14:01:04.385396] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.385402] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.997 [2024-06-10 14:01:04.385416] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.385423] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.385429] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.997 [2024-06-10 14:01:04.385439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.997 [2024-06-10 14:01:04.385457] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.997 [2024-06-10 14:01:04.385570] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.997 [2024-06-10 14:01:04.385589] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.997 [2024-06-10 14:01:04.385596] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.997 
[2024-06-10 14:01:04.385603] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.997 [2024-06-10 14:01:04.385617] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.385624] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.385631] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.997 [2024-06-10 14:01:04.385640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.997 [2024-06-10 14:01:04.385656] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.997 [2024-06-10 14:01:04.385820] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.997 [2024-06-10 14:01:04.385829] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.997 [2024-06-10 14:01:04.385835] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.385842] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.997 [2024-06-10 14:01:04.385856] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.385863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.385869] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.997 [2024-06-10 14:01:04.385879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.997 [2024-06-10 14:01:04.385894] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.997 [2024-06-10 14:01:04.386009] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.997 [2024-06-10 14:01:04.386018] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.997 [2024-06-10 14:01:04.386025] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.386032] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.997 [2024-06-10 14:01:04.386046] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.386053] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.386059] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.997 [2024-06-10 14:01:04.386069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.997 [2024-06-10 14:01:04.386084] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.997 [2024-06-10 14:01:04.386185] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.997 [2024-06-10 14:01:04.386195] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.997 [2024-06-10 14:01:04.386201] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.386208] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.997 [2024-06-10 14:01:04.386222] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.386229] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.386235] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.997 [2024-06-10 14:01:04.386245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.997 [2024-06-10 14:01:04.386260] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.997 [2024-06-10 14:01:04.386366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.997 [2024-06-10 14:01:04.386376] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.997 [2024-06-10 14:01:04.386382] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.386389] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.997 [2024-06-10 14:01:04.386403] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.386410] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.997 [2024-06-10 14:01:04.386416] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.997 [2024-06-10 14:01:04.386426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.997 [2024-06-10 14:01:04.386441] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.997 [2024-06-10 14:01:04.386550] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.997 [2024-06-10 14:01:04.386559] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.997 [2024-06-10 14:01:04.386566] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.386572] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.998 [2024-06-10 14:01:04.386593] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.386600] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.386607] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.998 [2024-06-10 14:01:04.386616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.998 [2024-06-10 14:01:04.386632] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.998 [2024-06-10 14:01:04.386737] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.998 [2024-06-10 14:01:04.386747] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.998 [2024-06-10 14:01:04.386753] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.386760] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.998 [2024-06-10 14:01:04.386774] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.386781] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.386788] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.998 [2024-06-10 14:01:04.386797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.998 [2024-06-10 14:01:04.386813] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.998 [2024-06-10 14:01:04.386978] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.998 [2024-06-10 14:01:04.386988] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.998 [2024-06-10 14:01:04.386994] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.387001] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.998 [2024-06-10 14:01:04.387015] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.387022] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.387028] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.998 [2024-06-10 14:01:04.387038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.998 [2024-06-10 14:01:04.387053] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.998 [2024-06-10 14:01:04.390587] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.998 [2024-06-10 14:01:04.390603] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.998 [2024-06-10 14:01:04.390609] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.390616] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.998 [2024-06-10 14:01:04.390632] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.390639] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.390645] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21f00) 00:32:49.998 [2024-06-10 14:01:04.390655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.998 [2024-06-10 14:01:04.390673] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd8d210, cid 3, qid 0 00:32:49.998 [2024-06-10 14:01:04.390789] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:49.998 [2024-06-10 14:01:04.390799] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:49.998 [2024-06-10 14:01:04.390805] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:49.998 [2024-06-10 14:01:04.390812] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd8d210) on tqpair=0xd21f00 00:32:49.998 [2024-06-10 14:01:04.390824] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:32:49.998 00:32:49.998 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:32:49.998 
[2024-06-10 14:01:04.437335] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:32:49.998 [2024-06-10 14:01:04.437388] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1555163 ] 00:32:49.998 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.261 [2024-06-10 14:01:04.473787] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:32:50.261 [2024-06-10 14:01:04.473838] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:50.261 [2024-06-10 14:01:04.473846] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:50.261 [2024-06-10 14:01:04.473861] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:50.261 [2024-06-10 14:01:04.473872] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:50.261 [2024-06-10 14:01:04.474275] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:32:50.261 [2024-06-10 14:01:04.474307] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xad4f00 0 00:32:50.261 [2024-06-10 14:01:04.487590] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:50.262 [2024-06-10 14:01:04.487610] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:50.262 [2024-06-10 14:01:04.487617] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:50.262 [2024-06-10 14:01:04.487623] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:50.262 [2024-06-10 14:01:04.487668] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.487676] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.487683] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4f00) 00:32:50.262 [2024-06-10 14:01:04.487697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:50.262 [2024-06-10 14:01:04.487723] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3fdf0, cid 0, qid 0 00:32:50.262 [2024-06-10 14:01:04.494589] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.262 [2024-06-10 14:01:04.494601] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.262 [2024-06-10 14:01:04.494607] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.494614] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3fdf0) on tqpair=0xad4f00 00:32:50.262 [2024-06-10 14:01:04.494631] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:50.262 [2024-06-10 14:01:04.494640] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:32:50.262 [2024-06-10 14:01:04.494649] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:32:50.262 [2024-06-10 14:01:04.494666] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.262 [2024-06-10 
14:01:04.494673] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.494679] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4f00) 00:32:50.262 [2024-06-10 14:01:04.494690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.262 [2024-06-10 14:01:04.494709] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3fdf0, cid 0, qid 0 00:32:50.262 [2024-06-10 14:01:04.494897] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.262 [2024-06-10 14:01:04.494907] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.262 [2024-06-10 14:01:04.494913] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.494920] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3fdf0) on tqpair=0xad4f00 00:32:50.262 [2024-06-10 14:01:04.494928] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:32:50.262 [2024-06-10 14:01:04.494941] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:32:50.262 [2024-06-10 14:01:04.494952] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.494958] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.494964] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4f00) 00:32:50.262 [2024-06-10 14:01:04.494974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.262 [2024-06-10 14:01:04.494991] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3fdf0, cid 0, qid 0 00:32:50.262 [2024-06-10 14:01:04.495093] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.262 [2024-06-10 14:01:04.495103] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.262 [2024-06-10 14:01:04.495109] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.495116] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3fdf0) on tqpair=0xad4f00 00:32:50.262 [2024-06-10 14:01:04.495124] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:32:50.262 [2024-06-10 14:01:04.495137] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:32:50.262 [2024-06-10 14:01:04.495147] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.495154] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.495160] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4f00) 00:32:50.262 [2024-06-10 14:01:04.495170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.262 [2024-06-10 14:01:04.495186] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3fdf0, cid 0, qid 0 00:32:50.262 [2024-06-10 14:01:04.495345] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.262 
[2024-06-10 14:01:04.495355] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.262 [2024-06-10 14:01:04.495361] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.495367] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3fdf0) on tqpair=0xad4f00 00:32:50.262 [2024-06-10 14:01:04.495375] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:50.262 [2024-06-10 14:01:04.495390] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.495397] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.495403] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4f00) 00:32:50.262 [2024-06-10 14:01:04.495413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.262 [2024-06-10 14:01:04.495428] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3fdf0, cid 0, qid 0 00:32:50.262 [2024-06-10 14:01:04.495525] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.262 [2024-06-10 14:01:04.495534] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.262 [2024-06-10 14:01:04.495541] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.495547] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3fdf0) on tqpair=0xad4f00 00:32:50.262 [2024-06-10 14:01:04.495554] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:32:50.262 [2024-06-10 14:01:04.495563] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:32:50.262 [2024-06-10 14:01:04.495584] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:50.262 [2024-06-10 14:01:04.495693] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:32:50.262 [2024-06-10 14:01:04.495700] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:50.262 [2024-06-10 14:01:04.495712] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.495718] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.495725] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4f00) 00:32:50.262 [2024-06-10 14:01:04.495734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.262 [2024-06-10 14:01:04.495751] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3fdf0, cid 0, qid 0 00:32:50.262 [2024-06-10 14:01:04.495849] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.262 [2024-06-10 14:01:04.495858] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.262 [2024-06-10 14:01:04.495865] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.495871] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3fdf0) on tqpair=0xad4f00 00:32:50.262 [2024-06-10 14:01:04.495879] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:50.262 [2024-06-10 14:01:04.495893] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.495900] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.495906] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4f00) 00:32:50.262 [2024-06-10 14:01:04.495916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.262 [2024-06-10 14:01:04.495934] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3fdf0, cid 0, qid 0 00:32:50.262 [2024-06-10 14:01:04.496035] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.262 [2024-06-10 14:01:04.496044] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.262 [2024-06-10 14:01:04.496050] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.496057] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3fdf0) on tqpair=0xad4f00 00:32:50.262 [2024-06-10 14:01:04.496064] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:50.262 [2024-06-10 14:01:04.496072] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:32:50.262 [2024-06-10 14:01:04.496085] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:32:50.262 [2024-06-10 14:01:04.496103] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:32:50.262 [2024-06-10 14:01:04.496115] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.496122] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4f00) 00:32:50.262 [2024-06-10 14:01:04.496132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.262 [2024-06-10 14:01:04.496148] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3fdf0, cid 0, qid 0 00:32:50.262 [2024-06-10 14:01:04.496322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:50.262 [2024-06-10 14:01:04.496332] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:50.262 [2024-06-10 14:01:04.496339] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.496345] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4f00): datao=0, datal=4096, cccid=0 00:32:50.262 [2024-06-10 14:01:04.496354] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb3fdf0) on tqpair(0xad4f00): expected_datao=0, payload_size=4096 00:32:50.262 [2024-06-10 14:01:04.496362] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.496372] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:32:50.262 [2024-06-10 14:01:04.496378] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.540586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.262 [2024-06-10 14:01:04.540599] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.262 [2024-06-10 14:01:04.540606] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.262 [2024-06-10 14:01:04.540612] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3fdf0) on tqpair=0xad4f00 00:32:50.263 [2024-06-10 14:01:04.540625] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:32:50.263 [2024-06-10 14:01:04.540633] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:32:50.263 [2024-06-10 14:01:04.540641] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:32:50.263 [2024-06-10 14:01:04.540652] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:32:50.263 [2024-06-10 14:01:04.540660] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:32:50.263 [2024-06-10 14:01:04.540669] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.540682] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.540693] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.540703] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.540710] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4f00) 00:32:50.263 [2024-06-10 14:01:04.540721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:50.263 [2024-06-10 14:01:04.540739] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3fdf0, cid 0, qid 0 00:32:50.263 [2024-06-10 14:01:04.540918] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.263 [2024-06-10 14:01:04.540928] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.263 [2024-06-10 14:01:04.540934] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.540940] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb3fdf0) on tqpair=0xad4f00 00:32:50.263 [2024-06-10 14:01:04.540950] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.540956] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.540963] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad4f00) 00:32:50.263 [2024-06-10 14:01:04.540972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:50.263 [2024-06-10 14:01:04.540982] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.540988] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:32:50.263 [2024-06-10 14:01:04.540995] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xad4f00) 00:32:50.263 [2024-06-10 14:01:04.541003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:50.263 [2024-06-10 14:01:04.541013] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541019] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541026] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xad4f00) 00:32:50.263 [2024-06-10 14:01:04.541034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:50.263 [2024-06-10 14:01:04.541044] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541050] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541057] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.263 [2024-06-10 14:01:04.541065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:50.263 [2024-06-10 14:01:04.541074] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.541091] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.541101] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541108] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4f00) 00:32:50.263 [2024-06-10 14:01:04.541118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.263 [2024-06-10 14:01:04.541136] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3fdf0, cid 0, qid 0 00:32:50.263 [2024-06-10 14:01:04.541144] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb3ff50, cid 1, qid 0 00:32:50.263 [2024-06-10 14:01:04.541152] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb400b0, cid 2, qid 0 00:32:50.263 [2024-06-10 14:01:04.541159] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.263 [2024-06-10 14:01:04.541167] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40370, cid 4, qid 0 00:32:50.263 [2024-06-10 14:01:04.541290] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.263 [2024-06-10 14:01:04.541300] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.263 [2024-06-10 14:01:04.541306] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541313] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40370) on tqpair=0xad4f00 00:32:50.263 [2024-06-10 14:01:04.541321] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:32:50.263 [2024-06-10 14:01:04.541330] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.541343] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.541353] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.541363] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541370] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541376] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4f00) 00:32:50.263 [2024-06-10 14:01:04.541386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:50.263 [2024-06-10 14:01:04.541402] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40370, cid 4, qid 0 00:32:50.263 [2024-06-10 14:01:04.541502] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.263 [2024-06-10 14:01:04.541511] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.263 [2024-06-10 14:01:04.541518] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541524] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40370) on tqpair=0xad4f00 00:32:50.263 [2024-06-10 14:01:04.541596] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.541612] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.541624] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541630] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4f00) 00:32:50.263 [2024-06-10 14:01:04.541640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.263 [2024-06-10 14:01:04.541657] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40370, cid 4, qid 0 00:32:50.263 [2024-06-10 14:01:04.541772] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:50.263 [2024-06-10 14:01:04.541782] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:50.263 [2024-06-10 14:01:04.541788] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541795] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4f00): datao=0, datal=4096, cccid=4 00:32:50.263 [2024-06-10 14:01:04.541803] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb40370) on tqpair(0xad4f00): expected_datao=0, payload_size=4096 00:32:50.263 [2024-06-10 14:01:04.541811] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541821] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541827] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541900] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.263 [2024-06-10 14:01:04.541909] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.263 [2024-06-10 14:01:04.541920] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541927] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40370) on tqpair=0xad4f00 00:32:50.263 [2024-06-10 14:01:04.541940] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:32:50.263 [2024-06-10 14:01:04.541954] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.541968] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.541979] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.541985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4f00) 00:32:50.263 [2024-06-10 14:01:04.541995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.263 [2024-06-10 14:01:04.542011] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40370, cid 4, qid 0 00:32:50.263 [2024-06-10 14:01:04.542130] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:50.263 [2024-06-10 14:01:04.542139] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:50.263 [2024-06-10 14:01:04.542146] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.542152] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4f00): datao=0, datal=4096, cccid=4 00:32:50.263 [2024-06-10 14:01:04.542160] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb40370) on tqpair(0xad4f00): expected_datao=0, payload_size=4096 00:32:50.263 [2024-06-10 14:01:04.542168] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.542178] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.542184] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.542258] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.263 [2024-06-10 14:01:04.542267] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.263 [2024-06-10 14:01:04.542273] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.542280] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40370) on tqpair=0xad4f00 00:32:50.263 [2024-06-10 14:01:04.542296] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.542310] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:32:50.263 [2024-06-10 14:01:04.542321] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.263 [2024-06-10 14:01:04.542328] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4f00) 
00:32:50.264 [2024-06-10 14:01:04.542337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.264 [2024-06-10 14:01:04.542353] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40370, cid 4, qid 0 00:32:50.264 [2024-06-10 14:01:04.542459] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:50.264 [2024-06-10 14:01:04.542469] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:50.264 [2024-06-10 14:01:04.542475] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.542482] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4f00): datao=0, datal=4096, cccid=4 00:32:50.264 [2024-06-10 14:01:04.542490] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb40370) on tqpair(0xad4f00): expected_datao=0, payload_size=4096 00:32:50.264 [2024-06-10 14:01:04.542498] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.542507] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.542516] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.542588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.264 [2024-06-10 14:01:04.542597] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.264 [2024-06-10 14:01:04.542603] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.542610] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40370) on tqpair=0xad4f00 00:32:50.264 [2024-06-10 14:01:04.542621] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:32:50.264 [2024-06-10 14:01:04.542634] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:32:50.264 [2024-06-10 14:01:04.542648] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:32:50.264 [2024-06-10 14:01:04.542657] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:32:50.264 [2024-06-10 14:01:04.542666] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:32:50.264 [2024-06-10 14:01:04.542675] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:32:50.264 [2024-06-10 14:01:04.542683] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:32:50.264 [2024-06-10 14:01:04.542692] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:32:50.264 [2024-06-10 14:01:04.542714] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.542721] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4f00) 00:32:50.264 [2024-06-10 14:01:04.542731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 
cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.264 [2024-06-10 14:01:04.542741] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.542748] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.542754] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad4f00) 00:32:50.264 [2024-06-10 14:01:04.542763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:50.264 [2024-06-10 14:01:04.542783] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40370, cid 4, qid 0 00:32:50.264 [2024-06-10 14:01:04.542791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb404d0, cid 5, qid 0 00:32:50.264 [2024-06-10 14:01:04.542959] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.264 [2024-06-10 14:01:04.542968] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.264 [2024-06-10 14:01:04.542974] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.542981] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40370) on tqpair=0xad4f00 00:32:50.264 [2024-06-10 14:01:04.542990] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.264 [2024-06-10 14:01:04.542999] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.264 [2024-06-10 14:01:04.543005] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.543012] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb404d0) on tqpair=0xad4f00 00:32:50.264 [2024-06-10 14:01:04.543026] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.543033] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad4f00) 00:32:50.264 [2024-06-10 14:01:04.543043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.264 [2024-06-10 14:01:04.543061] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb404d0, cid 5, qid 0 00:32:50.264 [2024-06-10 14:01:04.543160] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.264 [2024-06-10 14:01:04.543170] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.264 [2024-06-10 14:01:04.543176] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.543183] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb404d0) on tqpair=0xad4f00 00:32:50.264 [2024-06-10 14:01:04.543197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.543204] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad4f00) 00:32:50.264 [2024-06-10 14:01:04.543213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.264 [2024-06-10 14:01:04.543228] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb404d0, cid 5, qid 0 00:32:50.264 [2024-06-10 14:01:04.543325] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.264 [2024-06-10 14:01:04.543335] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:32:50.264 [2024-06-10 14:01:04.543341] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.543347] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb404d0) on tqpair=0xad4f00 00:32:50.264 [2024-06-10 14:01:04.543361] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.543368] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad4f00) 00:32:50.264 [2024-06-10 14:01:04.543377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.264 [2024-06-10 14:01:04.543392] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb404d0, cid 5, qid 0 00:32:50.264 [2024-06-10 14:01:04.543549] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.264 [2024-06-10 14:01:04.543558] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.264 [2024-06-10 14:01:04.543564] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.543571] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb404d0) on tqpair=0xad4f00 00:32:50.264 [2024-06-10 14:01:04.543595] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.543602] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad4f00) 00:32:50.264 [2024-06-10 14:01:04.543611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.264 [2024-06-10 14:01:04.543622] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.543629] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad4f00) 00:32:50.264 [2024-06-10 14:01:04.543638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.264 [2024-06-10 14:01:04.543648] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.543655] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xad4f00) 00:32:50.264 [2024-06-10 14:01:04.543664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.264 [2024-06-10 14:01:04.543675] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.543681] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xad4f00) 00:32:50.264 [2024-06-10 14:01:04.543690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.264 [2024-06-10 14:01:04.543710] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb404d0, cid 5, qid 0 00:32:50.264 [2024-06-10 14:01:04.543719] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40370, cid 4, qid 0 00:32:50.264 [2024-06-10 14:01:04.543726] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40630, cid 6, qid 0 00:32:50.264 
[2024-06-10 14:01:04.543734] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40790, cid 7, qid 0 00:32:50.264 [2024-06-10 14:01:04.543886] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:50.264 [2024-06-10 14:01:04.543896] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:50.264 [2024-06-10 14:01:04.543903] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.543909] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4f00): datao=0, datal=8192, cccid=5 00:32:50.264 [2024-06-10 14:01:04.543917] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb404d0) on tqpair(0xad4f00): expected_datao=0, payload_size=8192 00:32:50.264 [2024-06-10 14:01:04.543925] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.544108] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.544115] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.544123] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:50.264 [2024-06-10 14:01:04.544132] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:50.264 [2024-06-10 14:01:04.544138] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.544144] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4f00): datao=0, datal=512, cccid=4 00:32:50.264 [2024-06-10 14:01:04.544153] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb40370) on tqpair(0xad4f00): expected_datao=0, payload_size=512 00:32:50.264 [2024-06-10 14:01:04.544160] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.544170] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.544176] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.544184] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:50.264 [2024-06-10 14:01:04.544193] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:50.264 [2024-06-10 14:01:04.544199] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:50.264 [2024-06-10 14:01:04.544205] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4f00): datao=0, datal=512, cccid=6 00:32:50.264 [2024-06-10 14:01:04.544213] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb40630) on tqpair(0xad4f00): expected_datao=0, payload_size=512 00:32:50.264 [2024-06-10 14:01:04.544221] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.265 [2024-06-10 14:01:04.544230] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:50.265 [2024-06-10 14:01:04.544237] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:50.265 [2024-06-10 14:01:04.544245] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:50.265 [2024-06-10 14:01:04.544254] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:50.265 [2024-06-10 14:01:04.544260] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:50.265 [2024-06-10 14:01:04.544266] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad4f00): datao=0, datal=4096, cccid=7 00:32:50.265 [2024-06-10 14:01:04.544274] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb40790) on tqpair(0xad4f00): expected_datao=0, payload_size=4096 00:32:50.265 [2024-06-10 14:01:04.544282] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.265 [2024-06-10 14:01:04.544292] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:50.265 [2024-06-10 14:01:04.544298] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:50.265 [2024-06-10 14:01:04.544310] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.265 [2024-06-10 14:01:04.544318] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.265 [2024-06-10 14:01:04.544326] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.265 [2024-06-10 14:01:04.544333] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb404d0) on tqpair=0xad4f00 00:32:50.265 [2024-06-10 14:01:04.544350] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.265 [2024-06-10 14:01:04.544359] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.265 [2024-06-10 14:01:04.544366] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.265 [2024-06-10 14:01:04.544372] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40370) on tqpair=0xad4f00 00:32:50.265 [2024-06-10 14:01:04.544385] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.265 [2024-06-10 14:01:04.544394] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.265 [2024-06-10 14:01:04.544400] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.265 [2024-06-10 14:01:04.544406] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40630) on tqpair=0xad4f00 00:32:50.265 [2024-06-10 14:01:04.544419] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.265 [2024-06-10 14:01:04.544428] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.265 [2024-06-10 14:01:04.544434] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.265 [2024-06-10 14:01:04.544441] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40790) on tqpair=0xad4f00 00:32:50.265 ===================================================== 00:32:50.265 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:50.265 ===================================================== 00:32:50.265 Controller Capabilities/Features 00:32:50.265 ================================ 00:32:50.265 Vendor ID: 8086 00:32:50.265 Subsystem Vendor ID: 8086 00:32:50.265 Serial Number: SPDK00000000000001 00:32:50.265 Model Number: SPDK bdev Controller 00:32:50.265 Firmware Version: 24.09 00:32:50.265 Recommended Arb Burst: 6 00:32:50.265 IEEE OUI Identifier: e4 d2 5c 00:32:50.265 Multi-path I/O 00:32:50.265 May have multiple subsystem ports: Yes 00:32:50.265 May have multiple controllers: Yes 00:32:50.265 Associated with SR-IOV VF: No 00:32:50.265 Max Data Transfer Size: 131072 00:32:50.265 Max Number of Namespaces: 32 00:32:50.265 Max Number of I/O Queues: 127 00:32:50.265 NVMe Specification Version (VS): 1.3 00:32:50.265 NVMe Specification Version (Identify): 1.3 00:32:50.265 Maximum Queue Entries: 128 00:32:50.265 Contiguous Queues Required: Yes 00:32:50.265 Arbitration Mechanisms Supported 00:32:50.265 Weighted Round Robin: Not Supported 00:32:50.265 Vendor Specific: Not Supported 00:32:50.265 Reset Timeout: 15000 
ms 00:32:50.265 Doorbell Stride: 4 bytes 00:32:50.265 NVM Subsystem Reset: Not Supported 00:32:50.265 Command Sets Supported 00:32:50.265 NVM Command Set: Supported 00:32:50.265 Boot Partition: Not Supported 00:32:50.265 Memory Page Size Minimum: 4096 bytes 00:32:50.265 Memory Page Size Maximum: 4096 bytes 00:32:50.265 Persistent Memory Region: Not Supported 00:32:50.265 Optional Asynchronous Events Supported 00:32:50.265 Namespace Attribute Notices: Supported 00:32:50.265 Firmware Activation Notices: Not Supported 00:32:50.265 ANA Change Notices: Not Supported 00:32:50.265 PLE Aggregate Log Change Notices: Not Supported 00:32:50.265 LBA Status Info Alert Notices: Not Supported 00:32:50.265 EGE Aggregate Log Change Notices: Not Supported 00:32:50.265 Normal NVM Subsystem Shutdown event: Not Supported 00:32:50.265 Zone Descriptor Change Notices: Not Supported 00:32:50.265 Discovery Log Change Notices: Not Supported 00:32:50.265 Controller Attributes 00:32:50.265 128-bit Host Identifier: Supported 00:32:50.265 Non-Operational Permissive Mode: Not Supported 00:32:50.265 NVM Sets: Not Supported 00:32:50.265 Read Recovery Levels: Not Supported 00:32:50.265 Endurance Groups: Not Supported 00:32:50.265 Predictable Latency Mode: Not Supported 00:32:50.265 Traffic Based Keep ALive: Not Supported 00:32:50.265 Namespace Granularity: Not Supported 00:32:50.265 SQ Associations: Not Supported 00:32:50.265 UUID List: Not Supported 00:32:50.265 Multi-Domain Subsystem: Not Supported 00:32:50.265 Fixed Capacity Management: Not Supported 00:32:50.265 Variable Capacity Management: Not Supported 00:32:50.265 Delete Endurance Group: Not Supported 00:32:50.265 Delete NVM Set: Not Supported 00:32:50.265 Extended LBA Formats Supported: Not Supported 00:32:50.265 Flexible Data Placement Supported: Not Supported 00:32:50.265 00:32:50.265 Controller Memory Buffer Support 00:32:50.265 ================================ 00:32:50.265 Supported: No 00:32:50.265 00:32:50.265 Persistent Memory Region Support 00:32:50.265 ================================ 00:32:50.265 Supported: No 00:32:50.265 00:32:50.265 Admin Command Set Attributes 00:32:50.265 ============================ 00:32:50.265 Security Send/Receive: Not Supported 00:32:50.265 Format NVM: Not Supported 00:32:50.265 Firmware Activate/Download: Not Supported 00:32:50.265 Namespace Management: Not Supported 00:32:50.265 Device Self-Test: Not Supported 00:32:50.265 Directives: Not Supported 00:32:50.265 NVMe-MI: Not Supported 00:32:50.265 Virtualization Management: Not Supported 00:32:50.265 Doorbell Buffer Config: Not Supported 00:32:50.265 Get LBA Status Capability: Not Supported 00:32:50.265 Command & Feature Lockdown Capability: Not Supported 00:32:50.265 Abort Command Limit: 4 00:32:50.265 Async Event Request Limit: 4 00:32:50.265 Number of Firmware Slots: N/A 00:32:50.265 Firmware Slot 1 Read-Only: N/A 00:32:50.265 Firmware Activation Without Reset: N/A 00:32:50.265 Multiple Update Detection Support: N/A 00:32:50.265 Firmware Update Granularity: No Information Provided 00:32:50.265 Per-Namespace SMART Log: No 00:32:50.265 Asymmetric Namespace Access Log Page: Not Supported 00:32:50.265 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:32:50.265 Command Effects Log Page: Supported 00:32:50.265 Get Log Page Extended Data: Supported 00:32:50.265 Telemetry Log Pages: Not Supported 00:32:50.265 Persistent Event Log Pages: Not Supported 00:32:50.265 Supported Log Pages Log Page: May Support 00:32:50.265 Commands Supported & Effects Log Page: Not Supported 00:32:50.265 
Feature Identifiers & Effects Log Page:May Support 00:32:50.265 NVMe-MI Commands & Effects Log Page: May Support 00:32:50.265 Data Area 4 for Telemetry Log: Not Supported 00:32:50.265 Error Log Page Entries Supported: 128 00:32:50.265 Keep Alive: Supported 00:32:50.265 Keep Alive Granularity: 10000 ms 00:32:50.265 00:32:50.265 NVM Command Set Attributes 00:32:50.265 ========================== 00:32:50.265 Submission Queue Entry Size 00:32:50.265 Max: 64 00:32:50.265 Min: 64 00:32:50.265 Completion Queue Entry Size 00:32:50.265 Max: 16 00:32:50.265 Min: 16 00:32:50.265 Number of Namespaces: 32 00:32:50.265 Compare Command: Supported 00:32:50.265 Write Uncorrectable Command: Not Supported 00:32:50.265 Dataset Management Command: Supported 00:32:50.265 Write Zeroes Command: Supported 00:32:50.265 Set Features Save Field: Not Supported 00:32:50.265 Reservations: Supported 00:32:50.265 Timestamp: Not Supported 00:32:50.265 Copy: Supported 00:32:50.265 Volatile Write Cache: Present 00:32:50.265 Atomic Write Unit (Normal): 1 00:32:50.265 Atomic Write Unit (PFail): 1 00:32:50.265 Atomic Compare & Write Unit: 1 00:32:50.265 Fused Compare & Write: Supported 00:32:50.265 Scatter-Gather List 00:32:50.265 SGL Command Set: Supported 00:32:50.265 SGL Keyed: Supported 00:32:50.265 SGL Bit Bucket Descriptor: Not Supported 00:32:50.265 SGL Metadata Pointer: Not Supported 00:32:50.265 Oversized SGL: Not Supported 00:32:50.265 SGL Metadata Address: Not Supported 00:32:50.265 SGL Offset: Supported 00:32:50.265 Transport SGL Data Block: Not Supported 00:32:50.265 Replay Protected Memory Block: Not Supported 00:32:50.265 00:32:50.265 Firmware Slot Information 00:32:50.265 ========================= 00:32:50.265 Active slot: 1 00:32:50.265 Slot 1 Firmware Revision: 24.09 00:32:50.265 00:32:50.265 00:32:50.265 Commands Supported and Effects 00:32:50.265 ============================== 00:32:50.265 Admin Commands 00:32:50.265 -------------- 00:32:50.265 Get Log Page (02h): Supported 00:32:50.265 Identify (06h): Supported 00:32:50.265 Abort (08h): Supported 00:32:50.265 Set Features (09h): Supported 00:32:50.266 Get Features (0Ah): Supported 00:32:50.266 Asynchronous Event Request (0Ch): Supported 00:32:50.266 Keep Alive (18h): Supported 00:32:50.266 I/O Commands 00:32:50.266 ------------ 00:32:50.266 Flush (00h): Supported LBA-Change 00:32:50.266 Write (01h): Supported LBA-Change 00:32:50.266 Read (02h): Supported 00:32:50.266 Compare (05h): Supported 00:32:50.266 Write Zeroes (08h): Supported LBA-Change 00:32:50.266 Dataset Management (09h): Supported LBA-Change 00:32:50.266 Copy (19h): Supported LBA-Change 00:32:50.266 Unknown (79h): Supported LBA-Change 00:32:50.266 Unknown (7Ah): Supported 00:32:50.266 00:32:50.266 Error Log 00:32:50.266 ========= 00:32:50.266 00:32:50.266 Arbitration 00:32:50.266 =========== 00:32:50.266 Arbitration Burst: 1 00:32:50.266 00:32:50.266 Power Management 00:32:50.266 ================ 00:32:50.266 Number of Power States: 1 00:32:50.266 Current Power State: Power State #0 00:32:50.266 Power State #0: 00:32:50.266 Max Power: 0.00 W 00:32:50.266 Non-Operational State: Operational 00:32:50.266 Entry Latency: Not Reported 00:32:50.266 Exit Latency: Not Reported 00:32:50.266 Relative Read Throughput: 0 00:32:50.266 Relative Read Latency: 0 00:32:50.266 Relative Write Throughput: 0 00:32:50.266 Relative Write Latency: 0 00:32:50.266 Idle Power: Not Reported 00:32:50.266 Active Power: Not Reported 00:32:50.266 Non-Operational Permissive Mode: Not Supported 00:32:50.266 00:32:50.266 Health 
Information 00:32:50.266 ================== 00:32:50.266 Critical Warnings: 00:32:50.266 Available Spare Space: OK 00:32:50.266 Temperature: OK 00:32:50.266 Device Reliability: OK 00:32:50.266 Read Only: No 00:32:50.266 Volatile Memory Backup: OK 00:32:50.266 Current Temperature: 0 Kelvin (-273 Celsius) 00:32:50.266 Temperature Threshold: [2024-06-10 14:01:04.544558] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.544566] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xad4f00) 00:32:50.266 [2024-06-10 14:01:04.548596] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.266 [2024-06-10 14:01:04.548618] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40790, cid 7, qid 0 00:32:50.266 [2024-06-10 14:01:04.548815] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.266 [2024-06-10 14:01:04.548825] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.266 [2024-06-10 14:01:04.548831] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.548837] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40790) on tqpair=0xad4f00 00:32:50.266 [2024-06-10 14:01:04.548878] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:32:50.266 [2024-06-10 14:01:04.548895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:50.266 [2024-06-10 14:01:04.548906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:50.266 [2024-06-10 14:01:04.548916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:50.266 [2024-06-10 14:01:04.548926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:50.266 [2024-06-10 14:01:04.548938] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.548945] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.548951] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.266 [2024-06-10 14:01:04.548961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.266 [2024-06-10 14:01:04.548980] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.266 [2024-06-10 14:01:04.549082] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.266 [2024-06-10 14:01:04.549091] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.266 [2024-06-10 14:01:04.549098] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.549104] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.266 [2024-06-10 14:01:04.549117] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.549123] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.266 [2024-06-10 
14:01:04.549130] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.266 [2024-06-10 14:01:04.549140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.266 [2024-06-10 14:01:04.549159] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.266 [2024-06-10 14:01:04.549267] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.266 [2024-06-10 14:01:04.549277] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.266 [2024-06-10 14:01:04.549283] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.549290] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.266 [2024-06-10 14:01:04.549297] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:32:50.266 [2024-06-10 14:01:04.549305] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:32:50.266 [2024-06-10 14:01:04.549320] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.549326] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.549333] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.266 [2024-06-10 14:01:04.549342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.266 [2024-06-10 14:01:04.549358] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.266 [2024-06-10 14:01:04.549461] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.266 [2024-06-10 14:01:04.549471] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.266 [2024-06-10 14:01:04.549477] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.549484] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.266 [2024-06-10 14:01:04.549498] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.549505] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.549511] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.266 [2024-06-10 14:01:04.549521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.266 [2024-06-10 14:01:04.549536] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.266 [2024-06-10 14:01:04.549640] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.266 [2024-06-10 14:01:04.549650] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.266 [2024-06-10 14:01:04.549657] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.549663] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.266 [2024-06-10 14:01:04.549677] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.266 [2024-06-10 
14:01:04.549684] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.266 [2024-06-10 14:01:04.549690] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.266 [2024-06-10 14:01:04.549700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.266 [2024-06-10 14:01:04.549716] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.266 [2024-06-10 14:01:04.549811] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.266 [2024-06-10 14:01:04.549821] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.266 [2024-06-10 14:01:04.549830] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.549836] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.267 [2024-06-10 14:01:04.549850] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.549857] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.549864] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.267 [2024-06-10 14:01:04.549873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.267 [2024-06-10 14:01:04.549889] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.267 [2024-06-10 14:01:04.549984] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.267 [2024-06-10 14:01:04.549994] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.267 [2024-06-10 14:01:04.550000] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550007] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.267 [2024-06-10 14:01:04.550021] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550028] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550034] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.267 [2024-06-10 14:01:04.550043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.267 [2024-06-10 14:01:04.550059] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.267 [2024-06-10 14:01:04.550163] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.267 [2024-06-10 14:01:04.550172] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.267 [2024-06-10 14:01:04.550179] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550185] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.267 [2024-06-10 14:01:04.550199] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550206] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550212] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0xad4f00) 00:32:50.267 [2024-06-10 14:01:04.550222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.267 [2024-06-10 14:01:04.550237] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.267 [2024-06-10 14:01:04.550337] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.267 [2024-06-10 14:01:04.550346] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.267 [2024-06-10 14:01:04.550352] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550359] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.267 [2024-06-10 14:01:04.550373] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550386] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.267 [2024-06-10 14:01:04.550395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.267 [2024-06-10 14:01:04.550411] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.267 [2024-06-10 14:01:04.550506] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.267 [2024-06-10 14:01:04.550516] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.267 [2024-06-10 14:01:04.550522] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550531] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.267 [2024-06-10 14:01:04.550545] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550552] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550558] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.267 [2024-06-10 14:01:04.550568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.267 [2024-06-10 14:01:04.550591] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.267 [2024-06-10 14:01:04.550684] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.267 [2024-06-10 14:01:04.550694] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.267 [2024-06-10 14:01:04.550700] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550706] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.267 [2024-06-10 14:01:04.550720] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550727] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550733] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.267 [2024-06-10 14:01:04.550743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.267 
[2024-06-10 14:01:04.550758] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.267 [2024-06-10 14:01:04.550855] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.267 [2024-06-10 14:01:04.550864] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.267 [2024-06-10 14:01:04.550871] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550877] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.267 [2024-06-10 14:01:04.550891] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550898] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.550904] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.267 [2024-06-10 14:01:04.550914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.267 [2024-06-10 14:01:04.550929] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.267 [2024-06-10 14:01:04.551028] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.267 [2024-06-10 14:01:04.551038] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.267 [2024-06-10 14:01:04.551044] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551050] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.267 [2024-06-10 14:01:04.551064] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551071] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551077] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.267 [2024-06-10 14:01:04.551087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.267 [2024-06-10 14:01:04.551102] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.267 [2024-06-10 14:01:04.551198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.267 [2024-06-10 14:01:04.551207] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.267 [2024-06-10 14:01:04.551214] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.267 [2024-06-10 14:01:04.551237] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551244] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551250] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.267 [2024-06-10 14:01:04.551259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.267 [2024-06-10 14:01:04.551275] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.267 [2024-06-10 14:01:04.551371] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:32:50.267 [2024-06-10 14:01:04.551380] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.267 [2024-06-10 14:01:04.551386] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551393] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.267 [2024-06-10 14:01:04.551407] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551414] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551420] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.267 [2024-06-10 14:01:04.551429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.267 [2024-06-10 14:01:04.551445] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.267 [2024-06-10 14:01:04.551541] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.267 [2024-06-10 14:01:04.551550] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.267 [2024-06-10 14:01:04.551556] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551563] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.267 [2024-06-10 14:01:04.551582] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551590] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551596] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.267 [2024-06-10 14:01:04.551605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.267 [2024-06-10 14:01:04.551621] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.267 [2024-06-10 14:01:04.551792] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.267 [2024-06-10 14:01:04.551801] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.267 [2024-06-10 14:01:04.551807] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551814] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.267 [2024-06-10 14:01:04.551828] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551835] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.267 [2024-06-10 14:01:04.551841] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.268 [2024-06-10 14:01:04.551850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.268 [2024-06-10 14:01:04.551866] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.268 [2024-06-10 14:01:04.551966] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.268 [2024-06-10 14:01:04.551975] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.268 [2024-06-10 14:01:04.551981] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.551988] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.268 [2024-06-10 14:01:04.552002] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.552011] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.552017] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.268 [2024-06-10 14:01:04.552027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.268 [2024-06-10 14:01:04.552042] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.268 [2024-06-10 14:01:04.552142] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.268 [2024-06-10 14:01:04.552151] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.268 [2024-06-10 14:01:04.552157] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.552164] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.268 [2024-06-10 14:01:04.552178] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.552185] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.552191] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.268 [2024-06-10 14:01:04.552201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.268 [2024-06-10 14:01:04.552216] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.268 [2024-06-10 14:01:04.552312] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.268 [2024-06-10 14:01:04.552321] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.268 [2024-06-10 14:01:04.552328] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.552334] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.268 [2024-06-10 14:01:04.552348] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.552355] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.552361] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.268 [2024-06-10 14:01:04.552370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.268 [2024-06-10 14:01:04.552386] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.268 [2024-06-10 14:01:04.552482] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.268 [2024-06-10 14:01:04.552491] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.268 [2024-06-10 14:01:04.552497] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.552504] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 
00:32:50.268 [2024-06-10 14:01:04.552517] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.552524] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.552531] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.268 [2024-06-10 14:01:04.552540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.268 [2024-06-10 14:01:04.552555] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.268 [2024-06-10 14:01:04.556587] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.268 [2024-06-10 14:01:04.556599] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.268 [2024-06-10 14:01:04.556605] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.556612] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.268 [2024-06-10 14:01:04.556626] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.556633] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.556642] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad4f00) 00:32:50.268 [2024-06-10 14:01:04.556652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.268 [2024-06-10 14:01:04.556670] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb40210, cid 3, qid 0 00:32:50.268 [2024-06-10 14:01:04.556850] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:50.268 [2024-06-10 14:01:04.556860] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:50.268 [2024-06-10 14:01:04.556866] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:50.268 [2024-06-10 14:01:04.556872] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb40210) on tqpair=0xad4f00 00:32:50.268 [2024-06-10 14:01:04.556884] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:32:50.268 0 Kelvin (-273 Celsius) 00:32:50.268 Available Spare: 0% 00:32:50.268 Available Spare Threshold: 0% 00:32:50.268 Life Percentage Used: 0% 00:32:50.268 Data Units Read: 0 00:32:50.268 Data Units Written: 0 00:32:50.268 Host Read Commands: 0 00:32:50.268 Host Write Commands: 0 00:32:50.268 Controller Busy Time: 0 minutes 00:32:50.268 Power Cycles: 0 00:32:50.268 Power On Hours: 0 hours 00:32:50.268 Unsafe Shutdowns: 0 00:32:50.268 Unrecoverable Media Errors: 0 00:32:50.268 Lifetime Error Log Entries: 0 00:32:50.268 Warning Temperature Time: 0 minutes 00:32:50.268 Critical Temperature Time: 0 minutes 00:32:50.268 00:32:50.268 Number of Queues 00:32:50.268 ================ 00:32:50.268 Number of I/O Submission Queues: 127 00:32:50.268 Number of I/O Completion Queues: 127 00:32:50.268 00:32:50.268 Active Namespaces 00:32:50.268 ================= 00:32:50.268 Namespace ID:1 00:32:50.268 Error Recovery Timeout: Unlimited 00:32:50.268 Command Set Identifier: NVM (00h) 00:32:50.268 Deallocate: Supported 00:32:50.268 Deallocated/Unwritten Error: Not Supported 00:32:50.268 Deallocated Read Value: Unknown 00:32:50.268 Deallocate in Write 
Zeroes: Not Supported 00:32:50.268 Deallocated Guard Field: 0xFFFF 00:32:50.268 Flush: Supported 00:32:50.268 Reservation: Supported 00:32:50.268 Namespace Sharing Capabilities: Multiple Controllers 00:32:50.268 Size (in LBAs): 131072 (0GiB) 00:32:50.268 Capacity (in LBAs): 131072 (0GiB) 00:32:50.268 Utilization (in LBAs): 131072 (0GiB) 00:32:50.268 NGUID: ABCDEF0123456789ABCDEF0123456789 00:32:50.268 EUI64: ABCDEF0123456789 00:32:50.268 UUID: 79aa67ee-eaf7-4de5-8e04-0a1862f54760 00:32:50.268 Thin Provisioning: Not Supported 00:32:50.268 Per-NS Atomic Units: Yes 00:32:50.268 Atomic Boundary Size (Normal): 0 00:32:50.268 Atomic Boundary Size (PFail): 0 00:32:50.268 Atomic Boundary Offset: 0 00:32:50.268 Maximum Single Source Range Length: 65535 00:32:50.268 Maximum Copy Length: 65535 00:32:50.268 Maximum Source Range Count: 1 00:32:50.268 NGUID/EUI64 Never Reused: No 00:32:50.268 Namespace Write Protected: No 00:32:50.268 Number of LBA Formats: 1 00:32:50.268 Current LBA Format: LBA Format #00 00:32:50.268 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:50.268 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:50.268 rmmod nvme_tcp 00:32:50.268 rmmod nvme_fabrics 00:32:50.268 rmmod nvme_keyring 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1554871 ']' 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1554871 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 1554871 ']' 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 1554871 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:50.268 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1554871 00:32:50.269 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:50.269 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:50.269 
14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1554871' 00:32:50.269 killing process with pid 1554871 00:32:50.269 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 1554871 00:32:50.269 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 1554871 00:32:50.528 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:50.528 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:50.528 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:50.528 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:50.528 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:50.528 14:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.528 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:50.528 14:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.066 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:53.066 00:32:53.066 real 0m12.718s 00:32:53.066 user 0m8.719s 00:32:53.066 sys 0m7.186s 00:32:53.066 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:53.066 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:53.066 ************************************ 00:32:53.066 END TEST nvmf_identify 00:32:53.066 ************************************ 00:32:53.066 14:01:07 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:53.066 14:01:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:53.066 14:01:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:53.066 14:01:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:53.066 ************************************ 00:32:53.067 START TEST nvmf_perf 00:32:53.067 ************************************ 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:53.067 * Looking for test storage... 
00:32:53.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.067 14:01:07 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:32:53.067 14:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:01.183 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:01.183 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:01.183 Found net devices under 0000:af:00.0: cvl_0_0 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:01.183 Found net devices under 0000:af:00.1: cvl_0_1 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:01.183 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:01.184 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:01.184 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:01.184 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:01.184 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:01.184 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:01.184 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:01.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:01.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:33:01.443 00:33:01.443 --- 10.0.0.2 ping statistics --- 00:33:01.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.443 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:01.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:01.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:33:01.443 00:33:01.443 --- 10.0.0.1 ping statistics --- 00:33:01.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.443 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1559593 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1559593 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 1559593 ']' 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:01.443 14:01:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:01.703 [2024-06-10 14:01:15.936857] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:33:01.703 [2024-06-10 14:01:15.936921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.703 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.703 [2024-06-10 14:01:16.063534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:01.703 [2024-06-10 14:01:16.150952] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:01.703 [2024-06-10 14:01:16.150996] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
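With nvmf_tgt launched inside the cvl_0_0_ns_spdk namespace, the perf test configures the target over its RPC socket before any I/O is generated. Condensed to the rpc.py calls that appear in the trace just below (repository paths shortened to rpc.py; the Nvme0n1 bdev comes from the local controller attached via gen_nvme.sh / load_subsystem_config, whose traddr 0000:d8:00.0 is read back with framework_get_config), the sequence is roughly:

  rpc.py bdev_malloc_create 64 512                                                       # 64 MiB malloc bdev, 512 B blocks -> Malloc0
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf is then run first against the local PCIe controller (-r 'trtype:PCIe traddr:0000:d8:00.0') as a baseline and afterwards against the TCP listener at increasing queue depths and I/O sizes, as the result tables below show.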
00:33:01.703 [2024-06-10 14:01:16.151010] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:01.703 [2024-06-10 14:01:16.151022] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:01.703 [2024-06-10 14:01:16.151032] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:01.703 [2024-06-10 14:01:16.151086] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.703 [2024-06-10 14:01:16.151109] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:33:01.703 [2024-06-10 14:01:16.151241] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:33:01.703 [2024-06-10 14:01:16.151241] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.638 14:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:02.638 14:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:33:02.638 14:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:02.638 14:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:02.638 14:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:02.638 14:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:02.638 14:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:02.638 14:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:33:05.926 14:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:33:05.926 14:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:33:05.926 14:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:33:05.926 14:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:06.186 14:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:33:06.186 14:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:33:06.186 14:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:33:06.186 14:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:33:06.186 14:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:33:06.186 [2024-06-10 14:01:20.653910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.446 14:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:06.446 14:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:33:06.446 14:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:06.704 14:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:33:06.704 14:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:06.963 14:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:07.222 [2024-06-10 14:01:21.549351] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.222 14:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:07.481 14:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:33:07.481 14:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:33:07.481 14:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:33:07.481 14:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:33:08.857 Initializing NVMe Controllers 00:33:08.857 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:33:08.857 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:33:08.857 Initialization complete. Launching workers. 00:33:08.857 ======================================================== 00:33:08.857 Latency(us) 00:33:08.857 Device Information : IOPS MiB/s Average min max 00:33:08.857 PCIE (0000:d8:00.0) NSID 1 from core 0: 76940.92 300.55 415.29 44.43 5295.81 00:33:08.857 ======================================================== 00:33:08.857 Total : 76940.92 300.55 415.29 44.43 5295.81 00:33:08.857 00:33:08.857 14:01:23 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:08.857 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.237 Initializing NVMe Controllers 00:33:10.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:10.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:10.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:10.237 Initialization complete. Launching workers. 
00:33:10.237 ======================================================== 00:33:10.237 Latency(us) 00:33:10.237 Device Information : IOPS MiB/s Average min max 00:33:10.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.00 0.32 12606.55 236.07 44734.30 00:33:10.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 48.00 0.19 21301.39 5987.14 50852.59 00:33:10.237 ======================================================== 00:33:10.237 Total : 130.00 0.51 15816.95 236.07 50852.59 00:33:10.237 00:33:10.237 14:01:24 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:10.237 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.616 Initializing NVMe Controllers 00:33:11.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:11.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:11.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:11.616 Initialization complete. Launching workers. 00:33:11.616 ======================================================== 00:33:11.616 Latency(us) 00:33:11.616 Device Information : IOPS MiB/s Average min max 00:33:11.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8321.00 32.50 3847.24 531.60 9638.98 00:33:11.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3749.00 14.64 8580.51 5282.45 23239.52 00:33:11.616 ======================================================== 00:33:11.616 Total : 12070.00 47.15 5317.42 531.60 23239.52 00:33:11.616 00:33:11.616 14:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:33:11.616 14:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:33:11.616 14:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:11.616 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.150 Initializing NVMe Controllers 00:33:14.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:14.150 Controller IO queue size 128, less than required. 00:33:14.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:14.150 Controller IO queue size 128, less than required. 00:33:14.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:14.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:14.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:14.150 Initialization complete. Launching workers. 
00:33:14.150 ======================================================== 00:33:14.150 Latency(us) 00:33:14.150 Device Information : IOPS MiB/s Average min max 00:33:14.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 877.06 219.27 150851.24 82461.17 237909.42 00:33:14.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 608.50 152.13 218664.72 103952.60 341911.15 00:33:14.150 ======================================================== 00:33:14.150 Total : 1485.57 371.39 178628.33 82461.17 341911.15 00:33:14.150 00:33:14.150 14:01:28 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:33:14.150 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.150 No valid NVMe controllers or AIO or URING devices found 00:33:14.150 Initializing NVMe Controllers 00:33:14.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:14.150 Controller IO queue size 128, less than required. 00:33:14.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:14.150 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:33:14.150 Controller IO queue size 128, less than required. 00:33:14.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:14.150 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:33:14.150 WARNING: Some requested NVMe devices were skipped 00:33:14.150 14:01:28 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:33:14.150 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.703 Initializing NVMe Controllers 00:33:16.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:16.703 Controller IO queue size 128, less than required. 00:33:16.703 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:16.703 Controller IO queue size 128, less than required. 00:33:16.703 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:16.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:16.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:16.703 Initialization complete. Launching workers. 
00:33:16.703 00:33:16.703 ==================== 00:33:16.703 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:33:16.703 TCP transport: 00:33:16.703 polls: 29614 00:33:16.703 idle_polls: 9535 00:33:16.703 sock_completions: 20079 00:33:16.703 nvme_completions: 4063 00:33:16.703 submitted_requests: 6106 00:33:16.703 queued_requests: 1 00:33:16.703 00:33:16.703 ==================== 00:33:16.703 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:33:16.703 TCP transport: 00:33:16.703 polls: 26268 00:33:16.703 idle_polls: 7431 00:33:16.703 sock_completions: 18837 00:33:16.703 nvme_completions: 3891 00:33:16.703 submitted_requests: 5848 00:33:16.703 queued_requests: 1 00:33:16.703 ======================================================== 00:33:16.703 Latency(us) 00:33:16.703 Device Information : IOPS MiB/s Average min max 00:33:16.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1015.41 253.85 132292.16 83464.46 206668.06 00:33:16.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 972.41 243.10 134809.12 49639.34 176357.93 00:33:16.703 ======================================================== 00:33:16.703 Total : 1987.81 496.95 133523.42 49639.34 206668.06 00:33:16.703 00:33:16.703 14:01:30 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:33:16.703 14:01:30 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:16.963 rmmod nvme_tcp 00:33:16.963 rmmod nvme_fabrics 00:33:16.963 rmmod nvme_keyring 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1559593 ']' 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1559593 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 1559593 ']' 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 1559593 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1559593 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:16.963 14:01:31 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1559593' 00:33:16.963 killing process with pid 1559593 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 1559593 00:33:16.963 14:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 1559593 00:33:19.497 14:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:19.497 14:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:19.497 14:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:19.497 14:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:19.497 14:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:19.497 14:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.497 14:01:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:19.497 14:01:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.400 14:01:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:21.400 00:33:21.400 real 0m28.441s 00:33:21.400 user 1m10.226s 00:33:21.400 sys 0m10.205s 00:33:21.400 14:01:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:21.400 14:01:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:21.400 ************************************ 00:33:21.400 END TEST nvmf_perf 00:33:21.400 ************************************ 00:33:21.400 14:01:35 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:21.400 14:01:35 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:21.400 14:01:35 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:21.400 14:01:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.400 ************************************ 00:33:21.400 START TEST nvmf_fio_host 00:33:21.400 ************************************ 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:21.400 * Looking for test storage... 
00:33:21.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.400 14:01:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:21.401 14:01:35 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:31.398 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:31.398 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
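gather_supported_nvmf_pci_devs walks a cached PCI list, matches the Intel E810 device IDs seen in the trace (0x8086:0x1592 / 0x159b) plus the listed Mellanox ConnectX IDs, and then resolves each matching function to its kernel netdev through sysfs. On this rig the matches are the two 0000:af:00.x ports bound to the ice driver. Outside the harness the same check can be approximated with standard tooling (lspci and the sysfs net directory are not part of the test scripts themselves):

  lspci -d 8086:159b                          # list E810 functions (vendor 0x8086, device 0x159b)
  ls /sys/bus/pci/devices/0000:af:00.0/net/   # netdev behind a function, e.g. cvl_0_0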
00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:31.399 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:31.399 Found net devices under 0000:af:00.0: cvl_0_0 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:31.399 Found net devices under 0000:af:00.1: cvl_0_1 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
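nvmf_tcp_init then splits the two ports across network namespaces so that target and initiator traffic actually crosses the wire: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24 for the target side, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, and an iptables rule admits TCP port 4420; the cross-namespace pings below confirm connectivity before any NVMe/TCP traffic is attempted. Reduced to the commands visible in the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace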
00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:31.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:31.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:33:31.399 00:33:31.399 --- 10.0.0.2 ping statistics --- 00:33:31.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.399 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:31.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:33:31.399 00:33:31.399 --- 10.0.0.1 ping statistics --- 00:33:31.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.399 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1566995 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1566995 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 1566995 ']' 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:31.399 14:01:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.399 [2024-06-10 14:01:44.472088] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:33:31.399 [2024-06-10 14:01:44.472154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.399 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.399 [2024-06-10 14:01:44.600408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:31.399 [2024-06-10 14:01:44.687566] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
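Once the target side is configured (TCP transport, the Malloc1-backed cnode1 subsystem and its 10.0.0.2:4420 listener, created with the same rpc.py calls a little further below), the fio_host test drives I/O from user space with fio and the SPDK NVMe ioengine rather than the kernel initiator: fio_nvme LD_PRELOADs the spdk_nvme plugin and encodes the connection parameters in the --filename string. Stripped of the repository paths, the invocation in this run is essentially:

  LD_PRELOAD=./build/fio/spdk_nvme \
    /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096

The same pattern is repeated afterwards with mock_sgl_config.fio to exercise the SGL path.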
00:33:31.399 [2024-06-10 14:01:44.687615] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.399 [2024-06-10 14:01:44.687628] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.399 [2024-06-10 14:01:44.687640] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.399 [2024-06-10 14:01:44.687650] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.399 [2024-06-10 14:01:44.687753] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.399 [2024-06-10 14:01:44.687855] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:33:31.399 [2024-06-10 14:01:44.687964] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.399 [2024-06-10 14:01:44.687964] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:33:31.399 14:01:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:31.399 14:01:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:33:31.400 14:01:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:31.400 [2024-06-10 14:01:45.587308] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.400 14:01:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:31.400 14:01:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:31.400 14:01:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.400 14:01:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:31.658 Malloc1 00:33:31.658 14:01:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:31.917 14:01:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:32.175 14:01:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:32.175 [2024-06-10 14:01:46.610858] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.175 14:01:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:33:32.433 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:32.717 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:32.717 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:32.717 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:32.717 14:01:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:32.978 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:32.978 fio-3.35 00:33:32.978 Starting 1 thread 00:33:32.978 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.514 00:33:35.514 test: (groupid=0, jobs=1): err= 0: pid=1567615: Mon Jun 10 14:01:49 2024 00:33:35.514 read: IOPS=8883, BW=34.7MiB/s (36.4MB/s)(69.6MiB/2007msec) 00:33:35.514 slat (usec): min=2, max=254, avg= 2.27, stdev= 2.59 00:33:35.514 clat (usec): min=2827, max=13568, avg=7955.20, stdev=606.28 00:33:35.514 lat (usec): min=2861, max=13570, avg=7957.47, stdev=606.07 00:33:35.514 clat percentiles (usec): 00:33:35.514 | 1.00th=[ 6587], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 7504], 00:33:35.514 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8094], 00:33:35.514 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:33:35.514 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[11994], 99.95th=[12780], 00:33:35.514 | 99.99th=[13566] 00:33:35.514 bw ( KiB/s): min=34720, 
max=36040, per=99.96%, avg=35522.00, stdev=567.06, samples=4 00:33:35.514 iops : min= 8680, max= 9010, avg=8880.50, stdev=141.76, samples=4 00:33:35.514 write: IOPS=8899, BW=34.8MiB/s (36.5MB/s)(69.8MiB/2007msec); 0 zone resets 00:33:35.514 slat (usec): min=2, max=224, avg= 2.37, stdev= 1.85 00:33:35.514 clat (usec): min=2446, max=12699, avg=6386.58, stdev=514.68 00:33:35.514 lat (usec): min=2462, max=12702, avg=6388.95, stdev=514.52 00:33:35.514 clat percentiles (usec): 00:33:35.514 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:33:35.514 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:33:35.514 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7111], 00:33:35.514 | 99.00th=[ 7439], 99.50th=[ 7635], 99.90th=[10421], 99.95th=[11994], 00:33:35.514 | 99.99th=[12649] 00:33:35.514 bw ( KiB/s): min=35432, max=35840, per=100.00%, avg=35606.00, stdev=180.00, samples=4 00:33:35.514 iops : min= 8858, max= 8960, avg=8901.50, stdev=45.00, samples=4 00:33:35.514 lat (msec) : 4=0.10%, 10=99.73%, 20=0.17% 00:33:35.514 cpu : usr=63.01%, sys=30.96%, ctx=32, majf=0, minf=5 00:33:35.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:35.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:35.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:35.514 issued rwts: total=17830,17861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:35.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:35.514 00:33:35.514 Run status group 0 (all jobs): 00:33:35.514 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.6MiB (73.0MB), run=2007-2007msec 00:33:35.514 WRITE: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.8MiB (73.2MB), run=2007-2007msec 00:33:35.514 14:01:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:35.514 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:35.514 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:33:35.514 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # 
awk '{print $3}' 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:35.515 14:01:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:35.772 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:35.772 fio-3.35 00:33:35.772 Starting 1 thread 00:33:35.772 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.301 00:33:38.301 test: (groupid=0, jobs=1): err= 0: pid=1568108: Mon Jun 10 14:01:52 2024 00:33:38.301 read: IOPS=8693, BW=136MiB/s (142MB/s)(273MiB/2006msec) 00:33:38.301 slat (usec): min=3, max=114, avg= 3.74, stdev= 1.65 00:33:38.301 clat (usec): min=2504, max=21955, avg=8723.59, stdev=2265.67 00:33:38.301 lat (usec): min=2507, max=21958, avg=8727.33, stdev=2265.88 00:33:38.301 clat percentiles (usec): 00:33:38.301 | 1.00th=[ 4621], 5.00th=[ 5473], 10.00th=[ 5997], 20.00th=[ 6718], 00:33:38.301 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 9110], 00:33:38.301 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11731], 95.00th=[12780], 00:33:38.301 | 99.00th=[15139], 99.50th=[15664], 99.90th=[16712], 99.95th=[16712], 00:33:38.301 | 99.99th=[17695] 00:33:38.301 bw ( KiB/s): min=55520, max=87776, per=51.88%, avg=72168.00, stdev=17160.36, samples=4 00:33:38.301 iops : min= 3470, max= 5486, avg=4510.50, stdev=1072.52, samples=4 00:33:38.301 write: IOPS=5300, BW=82.8MiB/s (86.8MB/s)(147MiB/1771msec); 0 zone resets 00:33:38.301 slat (usec): min=40, max=385, avg=41.62, stdev= 6.69 00:33:38.301 clat (usec): min=4127, max=18115, avg=10122.05, stdev=1826.58 00:33:38.301 lat (usec): min=4167, max=18160, avg=10163.66, stdev=1827.55 00:33:38.301 clat percentiles (usec): 00:33:38.301 | 1.00th=[ 6915], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8586], 00:33:38.301 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10290], 00:33:38.301 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12649], 95.00th=[13698], 00:33:38.301 | 99.00th=[15533], 99.50th=[16057], 99.90th=[17695], 99.95th=[17695], 00:33:38.301 | 99.99th=[18220] 00:33:38.301 bw ( KiB/s): min=59136, max=90368, per=88.55%, avg=75096.00, stdev=17269.04, samples=4 00:33:38.301 iops : min= 3696, max= 5648, avg=4693.50, stdev=1079.32, samples=4 00:33:38.301 lat (msec) : 4=0.18%, 10=65.97%, 20=33.85%, 50=0.01% 00:33:38.301 cpu : usr=86.93%, sys=11.67%, ctx=18, majf=0, minf=2 00:33:38.301 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:33:38.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:38.301 issued rwts: total=17440,9387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.301 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:38.301 00:33:38.301 Run status group 0 (all jobs): 00:33:38.301 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=273MiB (286MB), run=2006-2006msec 00:33:38.301 WRITE: bw=82.8MiB/s (86.8MB/s), 82.8MiB/s-82.8MiB/s (86.8MB/s-86.8MB/s), io=147MiB (154MB), run=1771-1771msec 00:33:38.302 14:01:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:38.302 14:01:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:33:38.302 14:01:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:38.302 14:01:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:38.302 14:01:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:38.302 14:01:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:38.302 14:01:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:33:38.302 14:01:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:38.302 14:01:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:33:38.302 14:01:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:38.302 14:01:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:38.302 rmmod nvme_tcp 00:33:38.302 rmmod nvme_fabrics 00:33:38.560 rmmod nvme_keyring 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1566995 ']' 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1566995 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 1566995 ']' 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 1566995 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1566995 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1566995' 00:33:38.560 killing process with pid 1566995 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 1566995 00:33:38.560 14:01:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 1566995 00:33:38.819 14:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:38.819 14:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:38.819 
14:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:38.819 14:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:38.819 14:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:38.819 14:01:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.819 14:01:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:38.819 14:01:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.726 14:01:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:40.726 00:33:40.726 real 0m19.537s 00:33:40.726 user 1m1.785s 00:33:40.726 sys 0m9.157s 00:33:40.726 14:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:40.726 14:01:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.726 ************************************ 00:33:40.726 END TEST nvmf_fio_host 00:33:40.726 ************************************ 00:33:40.985 14:01:55 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:40.985 14:01:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:40.985 14:01:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:40.985 14:01:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:40.985 ************************************ 00:33:40.985 START TEST nvmf_failover 00:33:40.985 ************************************ 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:40.985 * Looking for test storage... 
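The nvmf_fio_host test that finishes above drives I/O through fio's external SPDK ioengine rather than the kernel initiator: fio_nvme LD_PRELOADs the spdk_nvme plugin and passes the target as a 'trtype=tcp adrfam=IPv4 traddr=... trsvcid=... ns=...' filename. A minimal sketch of the equivalent invocation, assuming fio is installed under /usr/src/fio and paths are relative to the SPDK repository root:

    # run the shipped example job against the TCP subsystem exported on 10.0.0.2:4420
    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
        ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The surrounding ldd/grep/awk trace only checks whether the plugin links against libasan or libclang_rt.asan so that a sanitizer runtime could be prepended to LD_PRELOAD; in this run neither is found, so LD_PRELOAD carries the plugin alone.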
00:33:40.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:40.985 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:33:40.986 14:01:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:50.970 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:50.970 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:50.970 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:50.971 Found net devices under 0000:af:00.0: cvl_0_0 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:50.971 Found net devices under 0000:af:00.1: cvl_0_1 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:50.971 14:02:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:50.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:50.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:33:50.971 00:33:50.971 --- 10.0.0.2 ping statistics --- 00:33:50.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.971 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:50.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:33:50.971 00:33:50.971 --- 10.0.0.1 ping statistics --- 00:33:50.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.971 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1573042 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1573042 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1573042 ']' 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:50.971 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:50.971 [2024-06-10 14:02:04.235719] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
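nvmftestinit above wires the ice-driver port pair (cvl_0_0/cvl_0_1) detected earlier into a point-to-point test topology before the failover target is launched: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the default namespace as 10.0.0.1, and the two pings confirm reachability in both directions. A condensed sketch of the commands traced above (interface names as reported on this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This target is started with -m 0xE instead of the 0xF used for the fio host test, so only three reactors (cores 1-3) come up in the DPDK initialization that follows. Later in the test, bdevperf attaches the NVMe0 controller over the 4420 and 4421 listeners and the script removes listeners one at a time (nvmf_subsystem_remove_listener) to exercise path failover; the long runs of nvmf_tcp_qpair_set_recv_state *ERROR* lines further down coincide with those removals.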
00:33:50.971 [2024-06-10 14:02:04.235779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.971 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.971 [2024-06-10 14:02:04.352806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:50.971 [2024-06-10 14:02:04.438678] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:50.971 [2024-06-10 14:02:04.438719] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:50.971 [2024-06-10 14:02:04.438733] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:50.971 [2024-06-10 14:02:04.438745] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:50.971 [2024-06-10 14:02:04.438754] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:50.971 [2024-06-10 14:02:04.438864] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:33:50.971 [2024-06-10 14:02:04.438978] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.971 [2024-06-10 14:02:04.438978] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:33:50.971 14:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:50.971 14:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:33:50.971 14:02:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:50.971 14:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:50.971 14:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:50.971 14:02:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.971 14:02:05 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:50.971 [2024-06-10 14:02:05.389241] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.971 14:02:05 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:51.229 Malloc0 00:33:51.229 14:02:05 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:51.488 14:02:05 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:51.747 14:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:52.004 [2024-06-10 14:02:06.355659] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.004 14:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:52.261 [2024-06-10 14:02:06.592385] tcp.c: 
982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:52.261 14:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:52.520 [2024-06-10 14:02:06.829159] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:52.520 14:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1573505 00:33:52.520 14:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:52.520 14:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:52.520 14:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1573505 /var/tmp/bdevperf.sock 00:33:52.520 14:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1573505 ']' 00:33:52.520 14:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:52.520 14:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:52.520 14:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:52.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:52.520 14:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:52.520 14:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:53.453 14:02:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:53.453 14:02:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:33:53.453 14:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:54.019 NVMe0n1 00:33:54.019 14:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:54.277 00:33:54.534 14:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1573865 00:33:54.534 14:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:54.534 14:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:55.468 14:02:09 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:55.727 [2024-06-10 14:02:09.970902] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206bf90 is same with the state(5) to be set 00:33:55.727 [2024-06-10 14:02:09.970953] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206bf90 is same with the 
state(5) to be set 00:33:55.727 [2024-06-10 14:02:09.970964] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206bf90 is same with the state(5) to be set (the same *ERROR* line is logged once per timestamp from 14:02:09.970973 through 14:02:09.971502 for tqpair=0x206bf90) 00:33:55.727 [2024-06-10 14:02:09.971510] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206bf90 is same
with the state(5) to be set 00:33:55.727 [2024-06-10 14:02:09.971519] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206bf90 is same with the state(5) to be set 00:33:55.727 [2024-06-10 14:02:09.971528] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206bf90 is same with the state(5) to be set 00:33:55.727 [2024-06-10 14:02:09.971536] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206bf90 is same with the state(5) to be set 00:33:55.727 14:02:10 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:59.068 14:02:13 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:59.068 00:33:59.068 14:02:13 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:59.327 [2024-06-10 14:02:13.715175] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715236] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715250] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715262] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715274] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715286] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715298] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715309] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715321] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715332] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715343] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715355] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715367] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715378] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715390] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) 
to be set 00:33:59.327 [2024-06-10 14:02:13.715401] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715413] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715424] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715435] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715456] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715468] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715480] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715492] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715503] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715515] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715526] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715538] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715550] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715562] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715573] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715591] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715604] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715615] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715627] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715639] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715651] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715663] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715674] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715686] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715697] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715709] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715720] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715732] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715743] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715755] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715766] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715780] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715792] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715803] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715815] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715826] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715838] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715850] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715861] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715873] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715884] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715896] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715907] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715918] 
tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.327 [2024-06-10 14:02:13.715930] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.715941] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.715952] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.715963] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.715975] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.715986] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.715997] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716009] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716020] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716032] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716043] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716054] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716066] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716077] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716090] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716102] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716113] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716125] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716136] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716148] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716159] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the 
state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716170] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716182] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716193] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716205] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716217] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716229] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716240] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 [2024-06-10 14:02:13.716251] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206d540 is same with the state(5) to be set 00:33:59.328 14:02:13 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:34:02.612 14:02:16 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.612 [2024-06-10 14:02:16.963604] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.612 14:02:16 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:34:03.547 14:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:03.806 [2024-06-10 14:02:18.208826] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206dc40 is same with the state(5) to be set 00:34:03.806 [2024-06-10 14:02:18.208879] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206dc40 is same with the state(5) to be set 00:34:03.806 [2024-06-10 14:02:18.208893] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206dc40 is same with the state(5) to be set 00:34:03.806 [2024-06-10 14:02:18.208905] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206dc40 is same with the state(5) to be set 00:34:03.806 [2024-06-10 14:02:18.208917] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206dc40 is same with the state(5) to be set 00:34:03.806 [2024-06-10 14:02:18.208928] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206dc40 is same with the state(5) to be set 00:34:03.806 [2024-06-10 14:02:18.208940] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206dc40 is same with the state(5) to be set 00:34:03.806 14:02:18 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1573865 00:34:10.372 0 00:34:10.372 14:02:23 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1573505 00:34:10.372 14:02:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1573505 ']' 00:34:10.372 14:02:23 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1573505 00:34:10.372 14:02:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:34:10.372 14:02:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:10.372 14:02:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1573505 00:34:10.372 14:02:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:10.372 14:02:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:10.372 14:02:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1573505' 00:34:10.372 killing process with pid 1573505 00:34:10.372 14:02:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1573505 00:34:10.372 14:02:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1573505 00:34:10.372 14:02:24 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:10.372 [2024-06-10 14:02:06.910922] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:34:10.372 [2024-06-10 14:02:06.910993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573505 ] 00:34:10.372 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.372 [2024-06-10 14:02:07.031582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.372 [2024-06-10 14:02:07.114698] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.372 Running I/O for 15 seconds... 
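For orientation, the failover.sh steps traced above (@45 through @57) boil down to the following sequence of RPC calls. This is a minimal sketch reconstructed from the log, assuming the NVMe-oF TCP target and bdevperf are already running and that rpc.py is SPDK's scripts/rpc.py; the addresses, ports, NQN and flags are exactly those shown above, while the real host/failover.sh adds timing and error handling not reproduced here (the $rpc and $nqn variables are only shorthand for this sketch).

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # failover.sh@47: register a second path to the same bdev (NVMe0), this time
  # on port 4422, through bdevperf's RPC socket.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn

  # failover.sh@48: drop the listener the host is currently using; queued I/O on
  # that path is aborted (SQ DELETION) and bdev_nvme resets onto another path.
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  sleep 3

  # failover.sh@53/@55/@57: bring port 4420 back, give the host a moment to
  # reconnect, then retire the temporary 4422 listener.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422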
00:34:10.372 [2024-06-10 14:02:09.972296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.372 [2024-06-10 14:02:09.972341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.372 [2024-06-10 14:02:09.972365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.372 [2024-06-10 14:02:09.972379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.372 [2024-06-10 14:02:09.972394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.372 [2024-06-10 14:02:09.972408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.372 [2024-06-10 14:02:09.972422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.372 [2024-06-10 14:02:09.972435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.372 [2024-06-10 14:02:09.972450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.372 [2024-06-10 14:02:09.972462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.372 [2024-06-10 14:02:09.972476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.372 [2024-06-10 14:02:09.972489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.372 [2024-06-10 14:02:09.972503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.372 [2024-06-10 14:02:09.972516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.372 [2024-06-10 14:02:09.972530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.372 [2024-06-10 14:02:09.972542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.373 [2024-06-10 14:02:09.972557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.373 [2024-06-10 14:02:09.972569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.373 [2024-06-10 14:02:09.972590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.373 [2024-06-10 14:02:09.972608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.373 [2024-06-10 14:02:09.972623] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.373 [... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pattern repeats for every outstanding I/O on qid:1 (READs lba 95608 through 96160 and WRITEs lba 96168 through 96480, interleaved); each command is completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:34:10.375 [2024-06-10 14:02:09.975671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:65 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.375 [2024-06-10 14:02:09.975683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.375 [2024-06-10 14:02:09.975697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.375 [2024-06-10 14:02:09.975710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.375 [2024-06-10 14:02:09.975724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.375 [2024-06-10 14:02:09.975737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.375 [2024-06-10 14:02:09.975751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.375 [2024-06-10 14:02:09.975765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.375 [2024-06-10 14:02:09.975780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.375 [2024-06-10 14:02:09.975792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.375 [2024-06-10 14:02:09.975806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.375 [2024-06-10 14:02:09.975819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.375 [2024-06-10 14:02:09.975845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.376 [2024-06-10 14:02:09.975856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.376 [2024-06-10 14:02:09.975868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96536 len:8 PRP1 0x0 PRP2 0x0 00:34:10.376 [2024-06-10 14:02:09.975881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:09.975933] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7bd8b0 was disconnected and freed. reset controller. 
00:34:10.376 [2024-06-10 14:02:09.975948] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:10.376 [2024-06-10 14:02:09.975975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.376 [2024-06-10 14:02:09.975989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:09.976002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.376 [2024-06-10 14:02:09.976015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:09.976028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.376 [2024-06-10 14:02:09.976041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:09.976054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.376 [2024-06-10 14:02:09.976066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:09.976079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.376 [2024-06-10 14:02:09.976113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79f3a0 (9): Bad file descriptor 00:34:10.376 [2024-06-10 14:02:09.979874] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.376 [2024-06-10 14:02:10.138572] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:10.376 [2024-06-10 14:02:13.718875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.376 [2024-06-10 14:02:13.718919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.718941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.718954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.718974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.718987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 
14:02:13.719188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:115088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.376 [2024-06-10 14:02:13.719762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.376 [2024-06-10 14:02:13.719774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.719789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.377 [2024-06-10 14:02:13.719801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.719816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.377 [2024-06-10 14:02:13.719828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.719843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.377 [2024-06-10 14:02:13.719855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.719870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.377 [2024-06-10 14:02:13.719882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.719896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.377 [2024-06-10 14:02:13.719908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.719923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.377 [2024-06-10 14:02:13.719935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.719950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.377 [2024-06-10 14:02:13.719962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.719976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.719990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115120 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:10.377 [2024-06-10 14:02:13.720283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720549] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.377 [2024-06-10 14:02:13.720609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.377 [2024-06-10 14:02:13.720623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.378 [2024-06-10 14:02:13.720636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.720650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.378 [2024-06-10 14:02:13.720663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.720680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.378 [2024-06-10 14:02:13.720692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.720706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.378 [2024-06-10 14:02:13.720718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.720733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.378 [2024-06-10 14:02:13.720745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.720759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.378 [2024-06-10 14:02:13.720772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.720786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.378 [2024-06-10 14:02:13.720798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.720812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.378 [2024-06-10 14:02:13.720825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.720851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.720862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115368 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.720875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.720891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.720901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.720912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115376 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.720924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.720937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.720946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.720957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115384 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.720969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.720982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.720992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115392 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115400 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115408 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721122] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115416 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115424 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115432 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115440 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115448 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115456 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115464 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115472 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115480 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115488 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115496 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 [2024-06-10 14:02:13.721635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115504 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.378 
[2024-06-10 14:02:13.721679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.378 [2024-06-10 14:02:13.721690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115512 len:8 PRP1 0x0 PRP2 0x0 00:34:10.378 [2024-06-10 14:02:13.721702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.378 [2024-06-10 14:02:13.721714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.721724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.721734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115520 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.721748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.721762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.721772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.721783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115528 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.721795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.721809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.721819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.721829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115536 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.721841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.721854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.721863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.721874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115544 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.721886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.721899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.721908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.721918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115552 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.721931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.721943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.721953] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.721963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115560 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.721975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.721988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.721997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115568 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115576 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115584 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115592 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115600 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115608 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115616 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115624 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115632 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115640 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115648 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 
[2024-06-10 14:02:13.722510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115656 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115664 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115672 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.722630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.722640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.722651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115680 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.722663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.734052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.734071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.734085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115688 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.734102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.734120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.734133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.734147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115696 len:8 PRP1 0x0 PRP2 0x0 00:34:10.379 [2024-06-10 14:02:13.734164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.379 [2024-06-10 14:02:13.734182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.379 [2024-06-10 14:02:13.734195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.379 [2024-06-10 14:02:13.734210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115704 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.734233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.734251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.734264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.734278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115712 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.734295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.734314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.734327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.734341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115720 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.734359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.734377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.734390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.734405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115728 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.734421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.734439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.734452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.734467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115736 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.734483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.734501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.734514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.734529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115744 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.734546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.734563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.734581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.734596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:115752 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.734612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.734630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.734643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.734658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115760 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.734674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.734692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.734705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.734722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115768 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.734739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.734757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.734770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.734785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115776 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.734801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.734819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.734832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.734846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115784 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.734863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.734881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.734894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.734909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115792 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.734925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.734943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.734956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.734970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115800 len:8 PRP1 0x0 PRP2 
0x0 00:34:10.380 [2024-06-10 14:02:13.734987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.735005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.735018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.735032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115808 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.735049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.735067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.380 [2024-06-10 14:02:13.735080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.380 [2024-06-10 14:02:13.735094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115816 len:8 PRP1 0x0 PRP2 0x0 00:34:10.380 [2024-06-10 14:02:13.735111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.735172] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7996a0 was disconnected and freed. reset controller. 00:34:10.380 [2024-06-10 14:02:13.735191] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:10.380 [2024-06-10 14:02:13.735228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.380 [2024-06-10 14:02:13.735247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.735268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.380 [2024-06-10 14:02:13.735285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.735304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.380 [2024-06-10 14:02:13.735321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.735339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.380 [2024-06-10 14:02:13.735356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:13.735373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
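(Annotation) The entries above show bdev_nvme reacting to the dropped TCP connection on 10.0.0.2:4421: the queued I/O is aborted with ABORTED - SQ DELETION status, the disconnected qpair is freed, and the path fails over to 10.0.0.2:4422 before the controller is reset. As a rough, hedged sketch of how a two-path attachment like this is typically set up with SPDK's rpc.py (not commands taken from this log): the subsystem NQN, address and ports are the ones appearing here, while the bdev name NVMe0, the serial number, the "-x failover" multipath mode, and the omission of namespace/bdev creation are illustrative assumptions.

    # target side (sketch): one subsystem listening on two TCP ports
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # initiator side (sketch): attach the same controller name on both paths so bdev_nvme can fail over
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

With both paths registered, dropping the 4421 listener produces exactly the sequence logged here: aborted queued I/O, "Start failover" to the alternate trid, and a successful controller reset. The log continues below.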
00:34:10.380 [2024-06-10 14:02:13.735410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79f3a0 (9): Bad file descriptor 00:34:10.380 [2024-06-10 14:02:13.740563] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.380 [2024-06-10 14:02:13.812692] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:10.380 [2024-06-10 14:02:18.211816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.380 [2024-06-10 14:02:18.211862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:18.211884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.380 [2024-06-10 14:02:18.211899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:18.211914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.380 [2024-06-10 14:02:18.211927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:18.211942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.380 [2024-06-10 14:02:18.211955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:18.211970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.380 [2024-06-10 14:02:18.211982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:18.211997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.380 [2024-06-10 14:02:18.212009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:18.212024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.380 [2024-06-10 14:02:18.212037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:18.212051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.380 [2024-06-10 14:02:18.212063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.380 [2024-06-10 14:02:18.212084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.380 [2024-06-10 14:02:18.212101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:10.381 [2024-06-10 14:02:18.212385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212665] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.381 [2024-06-10 14:02:18.212705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.381 [2024-06-10 14:02:18.212719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.212731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.212746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.212758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.212774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.212787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.212801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.212814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.212828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.212841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.212855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.212868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.212882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.212894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.212909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.212921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.212935] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.212948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.212962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.212974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.212988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37864 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.383 [2024-06-10 14:02:18.213403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.383 [2024-06-10 14:02:18.213445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37928 len:8 PRP1 0x0 PRP2 0x0 00:34:10.383 [2024-06-10 14:02:18.213458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.383 [2024-06-10 14:02:18.213486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.383 [2024-06-10 14:02:18.213497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37936 len:8 PRP1 0x0 PRP2 0x0 00:34:10.383 
[2024-06-10 14:02:18.213509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.383 [2024-06-10 14:02:18.213532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.383 [2024-06-10 14:02:18.213542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37944 len:8 PRP1 0x0 PRP2 0x0 00:34:10.383 [2024-06-10 14:02:18.213554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.383 [2024-06-10 14:02:18.213581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.383 [2024-06-10 14:02:18.213592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37952 len:8 PRP1 0x0 PRP2 0x0 00:34:10.383 [2024-06-10 14:02:18.213604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.383 [2024-06-10 14:02:18.213626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.383 [2024-06-10 14:02:18.213638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37960 len:8 PRP1 0x0 PRP2 0x0 00:34:10.383 [2024-06-10 14:02:18.213650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.383 [2024-06-10 14:02:18.213672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.383 [2024-06-10 14:02:18.213683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37968 len:8 PRP1 0x0 PRP2 0x0 00:34:10.383 [2024-06-10 14:02:18.213695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.383 [2024-06-10 14:02:18.213708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.383 [2024-06-10 14:02:18.213717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.213728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37976 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.213740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.213753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.213763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.213773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37984 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.213785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.213798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.213807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.213818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37992 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.213832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.213845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.213854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.213864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38000 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.213877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.213889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.213899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.213909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38008 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.213921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.213934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.213944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.213954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38016 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.213966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.213979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.213989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38024 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38032 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38040 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38048 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38056 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38064 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38072 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38080 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:10.384 [2024-06-10 14:02:18.214340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38088 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38096 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38104 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38112 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38120 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38128 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214617] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38136 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38144 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38152 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.384 [2024-06-10 14:02:18.214759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.384 [2024-06-10 14:02:18.214769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.384 [2024-06-10 14:02:18.214779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38160 len:8 PRP1 0x0 PRP2 0x0 00:34:10.384 [2024-06-10 14:02:18.214791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.214804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.214814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.214824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38168 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.214836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.214849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.214859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.214869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38176 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.214881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.214896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.214906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.214916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38184 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.214928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.214941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.214950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.214961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38192 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.214973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.214986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.214995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38200 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38208 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38216 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38224 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 
14:02:18.215177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38232 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38240 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38248 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38256 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38264 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38272 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215449] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38280 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38288 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38296 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38304 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38312 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38320 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38328 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38336 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38344 len:8 PRP1 0x0 PRP2 0x0 00:34:10.385 [2024-06-10 14:02:18.215845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.385 [2024-06-10 14:02:18.215858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.385 [2024-06-10 14:02:18.215868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.385 [2024-06-10 14:02:18.215879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38352 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.215891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.215904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.215914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.215924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38360 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.215937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.215951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.215961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.215972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38368 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.215984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.215996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.216006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 
14:02:18.216016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38376 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.216029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.216042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.216051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38384 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.227233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.227251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.227264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38392 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.227296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.227313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.227327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38400 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.227358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.227375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.227389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38408 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.227421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.227438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.227452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38416 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.227484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.227502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.227515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38424 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.227552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.227570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.227601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37480 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.227633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.227651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.227664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37488 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.227695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.227713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.227726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37496 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.227759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.227776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.227789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37504 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.227821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.227838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.227851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37512 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.227882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.227900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.227913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:37520 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.227945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.227963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.227976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.227991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37528 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.228008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.228025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.386 [2024-06-10 14:02:18.228039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.386 [2024-06-10 14:02:18.228056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37536 len:8 PRP1 0x0 PRP2 0x0 00:34:10.386 [2024-06-10 14:02:18.228073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.228134] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x79b110 was disconnected and freed. reset controller. 00:34:10.386 [2024-06-10 14:02:18.228153] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:34:10.386 [2024-06-10 14:02:18.228190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.386 [2024-06-10 14:02:18.228208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.228227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.386 [2024-06-10 14:02:18.228244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.228263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.386 [2024-06-10 14:02:18.228280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.228297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.386 [2024-06-10 14:02:18.228315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.386 [2024-06-10 14:02:18.228332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:10.386 [2024-06-10 14:02:18.228385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79f3a0 (9): Bad file descriptor 00:34:10.386 [2024-06-10 14:02:18.233547] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.386 [2024-06-10 14:02:18.304255] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:10.386 00:34:10.386 Latency(us) 00:34:10.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.386 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:10.386 Verification LBA range: start 0x0 length 0x4000 00:34:10.386 NVMe0n1 : 15.00 8383.79 32.75 675.86 0.00 14098.48 838.86 25060.97 00:34:10.386 =================================================================================================================== 00:34:10.386 Total : 8383.79 32.75 675.86 0.00 14098.48 838.86 25060.97 00:34:10.386 Received shutdown signal, test time was about 15.000000 seconds 00:34:10.386 00:34:10.386 Latency(us) 00:34:10.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.387 =================================================================================================================== 00:34:10.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:10.387 14:02:24 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:34:10.387 14:02:24 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:34:10.387 14:02:24 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:34:10.387 14:02:24 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1576276 00:34:10.387 14:02:24 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1576276 /var/tmp/bdevperf.sock 00:34:10.387 14:02:24 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:34:10.387 14:02:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1576276 ']' 00:34:10.387 14:02:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:10.387 14:02:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:10.387 14:02:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:10.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
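The bdevperf instance launched above is started in wait-for-RPC mode (-z) against a private RPC socket, and the test blocks until that socket answers before configuring it. A minimal sketch of that launch pattern, run from the spdk checkout, with a simple poll loop standing in for the suite's waitforlisten helper (the loop and the rpc_get_methods probe are illustrative assumptions, not the helper's actual code):

    SOCK=/var/tmp/bdevperf.sock
    ./build/examples/bdevperf -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!                       # saved so the test can kill it during teardown
    # Poll until the RPC socket is up and answering before sending any config RPCs.
    until ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done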
00:34:10.387 14:02:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:10.387 14:02:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:10.954 14:02:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:10.954 14:02:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:34:10.954 14:02:25 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:10.954 [2024-06-10 14:02:25.363583] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:10.954 14:02:25 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:11.212 [2024-06-10 14:02:25.600314] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:11.212 14:02:25 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:11.777 NVMe0n1 00:34:11.777 14:02:26 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.034 00:34:12.034 14:02:26 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.292 00:34:12.292 14:02:26 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:12.292 14:02:26 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:12.549 14:02:26 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.807 14:02:27 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:16.086 14:02:30 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:16.086 14:02:30 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:16.086 14:02:30 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:16.086 14:02:30 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1577339 00:34:16.086 14:02:30 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1577339 00:34:17.460 0 00:34:17.460 14:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:17.460 [2024-06-10 14:02:24.240708] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:34:17.460 [2024-06-10 14:02:24.240778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576276 ] 00:34:17.460 EAL: No free 2048 kB hugepages reported on node 1 00:34:17.460 [2024-06-10 14:02:24.359878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.460 [2024-06-10 14:02:24.436496] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.460 [2024-06-10 14:02:27.198406] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:17.460 [2024-06-10 14:02:27.198463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.460 [2024-06-10 14:02:27.198480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.460 [2024-06-10 14:02:27.198495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.460 [2024-06-10 14:02:27.198508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.460 [2024-06-10 14:02:27.198522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.460 [2024-06-10 14:02:27.198534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.460 [2024-06-10 14:02:27.198548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.460 [2024-06-10 14:02:27.198561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.460 [2024-06-10 14:02:27.198573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.460 [2024-06-10 14:02:27.198610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.460 [2024-06-10 14:02:27.198630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a63a0 (9): Bad file descriptor 00:34:17.460 [2024-06-10 14:02:27.244300] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:17.460 Running I/O for 1 seconds... 
00:34:17.460 00:34:17.460 Latency(us) 00:34:17.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.460 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:17.460 Verification LBA range: start 0x0 length 0x4000 00:34:17.460 NVMe0n1 : 1.01 8292.96 32.39 0.00 0.00 15369.21 1677.72 16357.79 00:34:17.460 =================================================================================================================== 00:34:17.460 Total : 8292.96 32.39 0.00 0.00 15369.21 1677.72 16357.79 00:34:17.460 14:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:17.460 14:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:17.460 14:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:17.718 14:02:32 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:17.718 14:02:32 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:17.974 14:02:32 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:18.232 14:02:32 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1576276 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1576276 ']' 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1576276 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1576276 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1576276' 00:34:21.512 killing process with pid 1576276 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1576276 00:34:21.512 14:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1576276 00:34:21.770 14:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:21.770 14:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:22.028 14:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:22.028 
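Stripped of the xtrace noise, the failover exercise driven above amounts to the following RPC steps (a condensed sketch of what the trace shows, not a substitute for failover.sh; target-side calls go to the default RPC socket, host-side calls to the bdevperf socket):

    RPC_HOST="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # Target side: expose two additional listeners for the same subsystem.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Host side: attach all three paths under the same controller name.
    $RPC_HOST bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC_HOST bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC_HOST bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Detach the active path; bdev_nvme logs "Start failover from ..." and resets onto the next trid.
    $RPC_HOST bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    $RPC_HOST bdev_nvme_get_controllers | grep -q NVMe0   # the controller must survive the path loss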
14:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:22.028 14:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:22.028 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:22.028 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:34:22.028 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:22.028 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:34:22.028 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:22.028 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:22.028 rmmod nvme_tcp 00:34:22.028 rmmod nvme_fabrics 00:34:22.028 rmmod nvme_keyring 00:34:22.028 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:22.028 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:34:22.028 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:34:22.028 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1573042 ']' 00:34:22.029 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1573042 00:34:22.029 14:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1573042 ']' 00:34:22.029 14:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1573042 00:34:22.029 14:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:34:22.029 14:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:22.029 14:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1573042 00:34:22.029 14:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:22.029 14:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:22.029 14:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1573042' 00:34:22.029 killing process with pid 1573042 00:34:22.029 14:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1573042 00:34:22.029 14:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1573042 00:34:22.287 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:22.287 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:22.287 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:22.287 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:22.287 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:22.287 14:02:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.287 14:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:22.287 14:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.820 14:02:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:24.820 00:34:24.820 real 0m43.488s 00:34:24.820 user 2m11.127s 00:34:24.820 sys 0m11.633s 00:34:24.820 14:02:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:24.820 14:02:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
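The teardown traced above (killprocess plus nvmftestfini) boils down to: confirm the pid is alive, check it is a reactor process rather than a sudo wrapper, kill it, wait for it, then unload the nvme-tcp modules. A loose reconstruction of the killprocess steps visible in the trace (the sudo branch and the error handling of the real helper are omitted):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0                           # nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 / reactor_1
        fi
        if [ "$process_name" != sudo ]; then                 # the real helper handles a sudo wrapper differently
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                          # works because the target is a child of the test shell
    }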
00:34:24.820 ************************************ 00:34:24.820 END TEST nvmf_failover 00:34:24.820 ************************************ 00:34:24.820 14:02:38 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:24.820 14:02:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:24.820 14:02:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:24.820 14:02:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:24.820 ************************************ 00:34:24.820 START TEST nvmf_host_discovery 00:34:24.820 ************************************ 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:24.820 * Looking for test storage... 00:34:24.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:24.820 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.821 14:02:38 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:34:24.821 14:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.934 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:32.935 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:32.935 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:32.935 Found net devices under 0000:af:00.0: cvl_0_0 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:32.935 Found net devices under 0000:af:00.1: cvl_0_1 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:32.935 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:33.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:33.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:34:33.194 00:34:33.194 --- 10.0.0.2 ping statistics --- 00:34:33.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.194 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:33.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:33.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:34:33.194 00:34:33.194 --- 10.0.0.1 ping statistics --- 00:34:33.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.194 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1582843 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1582843 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 1582843 ']' 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:33.194 14:02:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.453 [2024-06-10 14:02:47.716267] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:34:33.453 [2024-06-10 14:02:47.716335] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:33.453 EAL: No free 2048 kB hugepages reported on node 1 00:34:33.453 [2024-06-10 14:02:47.833011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.453 [2024-06-10 14:02:47.917619] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:33.453 [2024-06-10 14:02:47.917661] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:33.453 [2024-06-10 14:02:47.917675] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:33.453 [2024-06-10 14:02:47.917687] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:33.453 [2024-06-10 14:02:47.917697] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
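Underneath the host-discovery setup, the two CVL ports are split across a network namespace so target and initiator traffic crosses a real link: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk and hosts nvmf_tgt, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator. A sketch of that topology, using only commands already visible in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
    # The target itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2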
00:34:33.453 [2024-06-10 14:02:47.917722] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.492 [2024-06-10 14:02:48.673931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.492 [2024-06-10 14:02:48.686134] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.492 null0 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.492 null1 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1582953 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1582953 /tmp/host.sock 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 1582953 ']' 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:34.492 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:34.492 14:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.492 [2024-06-10 14:02:48.768013] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:34:34.492 [2024-06-10 14:02:48.768073] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582953 ] 00:34:34.492 EAL: No free 2048 kB hugepages reported on node 1 00:34:34.492 [2024-06-10 14:02:48.890473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.751 [2024-06-10 14:02:48.976400] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:35.317 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:35.575 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:35.576 14:02:49 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:35.576 14:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.576 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:35.576 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:35.576 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.576 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.576 [2024-06-10 14:02:50.037911] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.576 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.576 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:34:35.834 14:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:34:36.400 [2024-06-10 14:02:50.735512] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:36.400 [2024-06-10 14:02:50.735536] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:36.400 [2024-06-10 14:02:50.735555] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:36.400 [2024-06-10 14:02:50.862991] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:36.658 [2024-06-10 14:02:51.087238] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:34:36.658 [2024-06-10 14:02:51.087263] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:36.917 14:02:51 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:36.917 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # (( max-- )) 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.176 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.177 [2024-06-10 14:02:51.590328] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:37.177 [2024-06-10 14:02:51.590975] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:37.177 [2024-06-10 14:02:51.591003] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:37.177 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:37.434 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:37.434 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:34:37.434 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:37.434 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:37.434 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.434 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:37.434 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.434 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:37.434 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.434 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:37.434 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.435 [2024-06-10 14:02:51.717400] 
bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:37.435 14:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:34:37.435 [2024-06-10 14:02:51.778949] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:37.435 [2024-06-10 14:02:51.778971] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:37.435 [2024-06-10 14:02:51.778981] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:38.366 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.625 [2024-06-10 14:02:52.866406] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:38.625 [2024-06-10 14:02:52.866433] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:38.625 [2024-06-10 14:02:52.868984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:38.625 [2024-06-10 14:02:52.869007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.625 [2024-06-10 14:02:52.869022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:38.625 [2024-06-10 14:02:52.869035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.625 [2024-06-10 14:02:52.869048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:38.625 [2024-06-10 14:02:52.869061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.625 [2024-06-10 14:02:52.869074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:38.625 [2024-06-10 14:02:52.869086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.625 [2024-06-10 14:02:52.869099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b63d70 is same with the state(5) to be set 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == 
'"nvme0"' ']]' 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:38.625 [2024-06-10 14:02:52.878996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b63d70 (9): Bad file descriptor 00:34:38.625 [2024-06-10 14:02:52.889040] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:38.625 [2024-06-10 14:02:52.889447] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.625 [2024-06-10 14:02:52.889468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b63d70 with addr=10.0.0.2, port=4420 00:34:38.625 [2024-06-10 14:02:52.889483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b63d70 is same with the state(5) to be set 00:34:38.625 [2024-06-10 14:02:52.889501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b63d70 (9): Bad file descriptor 00:34:38.625 [2024-06-10 14:02:52.889539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:38.625 [2024-06-10 14:02:52.889553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:38.625 [2024-06-10 14:02:52.889567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:38.625 [2024-06-10 14:02:52.889589] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.625 [2024-06-10 14:02:52.899106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:38.625 [2024-06-10 14:02:52.899396] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.625 [2024-06-10 14:02:52.899415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b63d70 with addr=10.0.0.2, port=4420 00:34:38.625 [2024-06-10 14:02:52.899428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b63d70 is same with the state(5) to be set 00:34:38.625 [2024-06-10 14:02:52.899446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b63d70 (9): Bad file descriptor 00:34:38.625 [2024-06-10 14:02:52.899463] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:38.625 [2024-06-10 14:02:52.899474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:38.625 [2024-06-10 14:02:52.899486] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:34:38.625 [2024-06-10 14:02:52.899501] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:38.625 [2024-06-10 14:02:52.909168] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:38.625 [2024-06-10 14:02:52.909550] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.625 [2024-06-10 14:02:52.909572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b63d70 with addr=10.0.0.2, port=4420 00:34:38.625 [2024-06-10 14:02:52.909590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b63d70 is same with the state(5) to be set 00:34:38.625 [2024-06-10 14:02:52.909608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b63d70 (9): Bad file descriptor 00:34:38.625 [2024-06-10 14:02:52.909666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:38.625 [2024-06-10 14:02:52.909682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:38.625 [2024-06-10 14:02:52.909694] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:38.625 [2024-06-10 14:02:52.909710] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:38.625 [2024-06-10 14:02:52.919230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:38.625 [2024-06-10 14:02:52.919571] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.625 [2024-06-10 14:02:52.919597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b63d70 with addr=10.0.0.2, port=4420 00:34:38.625 [2024-06-10 14:02:52.919610] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b63d70 is same with the state(5) to be set 00:34:38.625 [2024-06-10 14:02:52.919628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b63d70 (9): Bad file descriptor 00:34:38.625 [2024-06-10 14:02:52.919645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:38.625 [2024-06-10 14:02:52.919656] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:38.625 [2024-06-10 14:02:52.919669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:38.625 [2024-06-10 14:02:52.919684] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
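The repeated "connect() failed, errno = 111" (ECONNREFUSED) messages in this stretch are the host retrying the path on port 4420 after the test removed that listener; they stop once the next discovery log page drops the stale path, as the "4420 not found / 4421 found again" lines below show. For orientation, a condensed recap of the target-side RPC sequence this test case has driven so far, taken from the rpc_cmd lines above; addressing the target via rpc.py on its default socket, and the pre-created null0/null1 bdevs, are assumptions:

```bash
# Sketch only: target-side setup as driven by host/discovery.sh in this run.
RPC=scripts/rpc.py                    # default socket of the target app assumed
NQN=nqn.2016-06.io.spdk:cnode0

$RPC nvmf_create_subsystem $NQN                                       # @86
$RPC nvmf_subsystem_add_ns $NQN null0                                 # @90
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # @96
$RPC nvmf_subsystem_add_host $NQN nqn.2021-12.io.spdk:test            # @103
$RPC nvmf_subsystem_add_ns $NQN null1                                 # @111
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421      # @118
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # @127
```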
00:34:38.625 [2024-06-10 14:02:52.929293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.625 [2024-06-10 14:02:52.929567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.625 [2024-06-10 14:02:52.929592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b63d70 with addr=10.0.0.2, port=4420 00:34:38.625 [2024-06-10 14:02:52.929605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b63d70 is same with the state(5) to be set 00:34:38.625 [2024-06-10 14:02:52.929622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b63d70 (9): Bad file descriptor 00:34:38.625 [2024-06-10 14:02:52.929638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:38.625 [2024-06-10 14:02:52.929649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:38.625 [2024-06-10 14:02:52.929662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:38.625 [2024-06-10 14:02:52.929677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:38.625 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:38.626 [2024-06-10 14:02:52.939353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:38.626 [2024-06-10 14:02:52.939721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.626 [2024-06-10 14:02:52.939742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b63d70 with addr=10.0.0.2, port=4420 00:34:38.626 [2024-06-10 14:02:52.939755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b63d70 is same with the state(5) to be set 00:34:38.626 [2024-06-10 14:02:52.939772] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b63d70 (9): Bad file descriptor 00:34:38.626 [2024-06-10 14:02:52.939798] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:38.626 [2024-06-10 14:02:52.939811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:38.626 [2024-06-10 14:02:52.939823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:38.626 [2024-06-10 14:02:52.939839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:38.626 [2024-06-10 14:02:52.949417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:38.626 [2024-06-10 14:02:52.949721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.626 [2024-06-10 14:02:52.949741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b63d70 with addr=10.0.0.2, port=4420 00:34:38.626 [2024-06-10 14:02:52.949755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b63d70 is same with the state(5) to be set 00:34:38.626 [2024-06-10 14:02:52.949773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b63d70 (9): Bad file descriptor 00:34:38.626 [2024-06-10 14:02:52.949810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:38.626 [2024-06-10 14:02:52.949823] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:38.626 [2024-06-10 14:02:52.949836] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:38.626 [2024-06-10 14:02:52.949852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
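The checks that follow rely on two more helpers whose expansions appear in the trace: get_subsystem_paths (host/discovery.sh@63), which lists the ports of a controller's attached paths, and get_notification_count (@74/@75), which counts RPC notifications newer than the last seen notify_id. A sketch reconstructed from those trace lines; the socket variable and the exact notify_id bookkeeping are assumptions:

```bash
HOST_SOCK=/tmp/host.sock
notify_id=0

get_subsystem_paths() {
    # trsvcid (TCP port) of every path attached to controller $1,
    # e.g. "4420 4421" while both listeners exist, "4421" after 4420 is removed.
    scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

get_notification_count() {
    # Count notifications (namespace add/remove, ...) newer than $notify_id.
    notification_count=$(scripts/rpc.py -s "$HOST_SOCK" notify_get_notifications \
        -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))   # accumulates across calls,
                                                    # matching the notify_id values
                                                    # logged in this trace
}
```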
00:34:38.626 [2024-06-10 14:02:52.954815] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:38.626 [2024-06-10 14:02:52.954836] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:38.626 14:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.626 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.883 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.883 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:38.883 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:38.883 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:38.883 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:38.883 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:38.884 
14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.884 14:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.257 [2024-06-10 14:02:54.325425] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:40.257 [2024-06-10 14:02:54.325446] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:40.257 [2024-06-10 14:02:54.325463] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:40.257 [2024-06-10 14:02:54.453906] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:40.257 [2024-06-10 14:02:54.559979] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:40.257 [2024-06-10 14:02:54.560013] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.257 request: 00:34:40.257 { 00:34:40.257 "name": "nvme", 00:34:40.257 "trtype": "tcp", 00:34:40.257 "traddr": "10.0.0.2", 00:34:40.257 "hostnqn": "nqn.2021-12.io.spdk:test", 
00:34:40.257 "adrfam": "ipv4", 00:34:40.257 "trsvcid": "8009", 00:34:40.257 "wait_for_attach": true, 00:34:40.257 "method": "bdev_nvme_start_discovery", 00:34:40.257 "req_id": 1 00:34:40.257 } 00:34:40.257 Got JSON-RPC error response 00:34:40.257 response: 00:34:40.257 { 00:34:40.257 "code": -17, 00:34:40.257 "message": "File exists" 00:34:40.257 } 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- 
# type -t rpc_cmd 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.257 request: 00:34:40.257 { 00:34:40.257 "name": "nvme_second", 00:34:40.257 "trtype": "tcp", 00:34:40.257 "traddr": "10.0.0.2", 00:34:40.257 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:40.257 "adrfam": "ipv4", 00:34:40.257 "trsvcid": "8009", 00:34:40.257 "wait_for_attach": true, 00:34:40.257 "method": "bdev_nvme_start_discovery", 00:34:40.257 "req_id": 1 00:34:40.257 } 00:34:40.257 Got JSON-RPC error response 00:34:40.257 response: 00:34:40.257 { 00:34:40.257 "code": -17, 00:34:40.257 "message": "File exists" 00:34:40.257 } 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:40.257 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:40.515 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:40.516 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:40.516 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:40.516 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:40.516 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:40.516 14:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.448 [2024-06-10 14:02:55.824817] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.448 [2024-06-10 14:02:55.824851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e480 with addr=10.0.0.2, port=8010 00:34:41.448 [2024-06-10 14:02:55.824870] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:41.448 [2024-06-10 14:02:55.824882] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:41.448 [2024-06-10 14:02:55.824893] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:42.382 [2024-06-10 14:02:56.827286] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.382 [2024-06-10 14:02:56.827316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e480 with addr=10.0.0.2, port=8010 00:34:42.382 [2024-06-10 14:02:56.827333] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:42.382 [2024-06-10 14:02:56.827345] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:42.382 [2024-06-10 14:02:56.827356] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:43.756 [2024-06-10 14:02:57.829376] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:43.756 request: 00:34:43.756 { 00:34:43.756 "name": "nvme_second", 00:34:43.756 "trtype": "tcp", 00:34:43.756 "traddr": "10.0.0.2", 00:34:43.756 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:43.756 "adrfam": "ipv4", 00:34:43.756 "trsvcid": "8010", 00:34:43.756 "attach_timeout_ms": 3000, 00:34:43.756 "method": "bdev_nvme_start_discovery", 00:34:43.756 "req_id": 1 00:34:43.756 } 00:34:43.756 Got JSON-RPC error response 00:34:43.756 response: 00:34:43.756 { 00:34:43.756 "code": -110, 00:34:43.756 "message": "Connection timed out" 00:34:43.756 } 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:34:43.756 
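A hedged aside on what the trace above exercises: the host-side discovery service is driven entirely through the bdev_nvme_start_discovery RPC against the host daemon's socket (/tmp/host.sock). The sketch below replays the same calls by hand with spdk/scripts/rpc.py (the command behind the harness's rpc_cmd wrapper); every address, port, NQN and flag is copied from the log, and the grouping here is editorial only.

    # Start discovery against the 8009 discovery service; -w waits for the initial attach to finish.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # Re-issuing start_discovery for the same discovery service is rejected with JSON-RPC
    # error -17 ("File exists"), which is exactly what the NOT wrapper asserts above.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # Pointing discovery at port 8010, where nothing is listening, keeps failing with connect()
    # errno 111 until the -T 3000 ms attach timeout expires and the RPC returns -110
    # ("Connection timed out"), as seen in the trace.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000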
14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1582953 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:43.756 14:02:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:43.757 rmmod nvme_tcp 00:34:43.757 rmmod nvme_fabrics 00:34:43.757 rmmod nvme_keyring 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1582843 ']' 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1582843 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 1582843 ']' 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 1582843 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:43.757 14:02:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1582843 00:34:43.757 14:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:43.757 14:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:43.757 14:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1582843' 00:34:43.757 killing process with pid 
1582843 00:34:43.757 14:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 1582843 00:34:43.757 14:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 1582843 00:34:44.016 14:02:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:44.016 14:02:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:44.016 14:02:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:44.016 14:02:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:44.016 14:02:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:44.016 14:02:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.016 14:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:44.016 14:02:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.916 14:03:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:45.916 00:34:45.916 real 0m21.478s 00:34:45.916 user 0m23.540s 00:34:45.916 sys 0m8.831s 00:34:45.916 14:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:45.916 14:03:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.916 ************************************ 00:34:45.916 END TEST nvmf_host_discovery 00:34:45.916 ************************************ 00:34:45.916 14:03:00 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:45.916 14:03:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:45.916 14:03:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:45.916 14:03:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.175 ************************************ 00:34:46.175 START TEST nvmf_host_multipath_status 00:34:46.175 ************************************ 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:46.175 * Looking for test storage... 
00:34:46.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:46.175 14:03:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:46.175 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:46.176 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.176 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:46.176 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.176 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:46.176 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:46.176 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:34:46.176 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:56.146 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:56.146 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
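A hedged sketch of the sysfs lookup this part of the trace performs: each supported PCI function (here the two Intel E810 ports, vendor 0x8086 device 0x159b at 0000:af:00.0 and 0000:af:00.1) is mapped to its kernel net device by listing /sys/bus/pci/devices/<bdf>/net/, which in this run yields cvl_0_0 and cvl_0_1. The bus addresses are the ones printed in the log; on another machine they would differ.

    for pci in 0000:af:00.0 0000:af:00.1; do
        # One entry per net device bound to this PCI function (cvl_0_0 / cvl_0_1 here).
        ls "/sys/bus/pci/devices/$pci/net/"
    done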
00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:56.146 Found net devices under 0000:af:00.0: cvl_0_0 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:56.146 Found net devices under 0000:af:00.1: cvl_0_1 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:56.146 14:03:09 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:56.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:56.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:34:56.146 00:34:56.146 --- 10.0.0.2 ping statistics --- 00:34:56.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.146 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:56.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:56.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:34:56.146 00:34:56.146 --- 10.0.0.1 ping statistics --- 00:34:56.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.146 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1589639 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1589639 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 1589639 ']' 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:56.146 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:56.147 [2024-06-10 14:03:09.488339] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
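For context, a hedged summary of the target/initiator split the trace above just finished verifying: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the nvmf target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings above confirm connectivity in both directions. The commands are lifted from the log; only their grouping here is editorial.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator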
00:34:56.147 [2024-06-10 14:03:09.488403] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.147 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.147 [2024-06-10 14:03:09.615996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:56.147 [2024-06-10 14:03:09.698601] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.147 [2024-06-10 14:03:09.698650] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.147 [2024-06-10 14:03:09.698663] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.147 [2024-06-10 14:03:09.698675] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.147 [2024-06-10 14:03:09.698686] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.147 [2024-06-10 14:03:09.698738] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.147 [2024-06-10 14:03:09.698743] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.147 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:56.147 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:34:56.147 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:56.147 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:56.147 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:56.147 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.147 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1589639 00:34:56.147 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:56.404 [2024-06-10 14:03:10.644049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.404 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:56.662 Malloc0 00:34:56.662 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:56.662 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:56.919 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:57.177 [2024-06-10 14:03:11.565617] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:57.177 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:57.434 [2024-06-10 14:03:11.794271] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:57.434 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1590163 00:34:57.434 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:57.434 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:57.434 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1590163 /var/tmp/bdevperf.sock 00:34:57.434 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 1590163 ']' 00:34:57.434 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:57.434 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:57.434 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:57.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:57.434 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:57.434 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:58.367 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:58.367 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:34:58.367 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:58.625 14:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:59.190 Nvme0n1 00:34:59.190 14:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:59.757 Nvme0n1 00:34:59.757 14:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:59.757 14:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:35:01.658 14:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:35:01.658 14:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:01.916 14:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:02.174 14:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:35:03.119 14:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:35:03.119 14:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:03.119 14:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.119 14:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:03.377 14:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.377 14:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:03.377 14:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:03.377 14:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.635 14:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:03.635 14:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:03.635 14:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.635 14:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:03.893 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.893 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:03.893 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.893 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:04.151 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.151 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:04.151 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.151 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:35:04.409 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.409 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:04.409 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.409 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:04.668 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.668 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:35:04.668 14:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:04.928 14:03:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:05.190 14:03:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:06.175 14:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:06.175 14:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:06.175 14:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.175 14:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:06.433 14:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:06.433 14:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:06.433 14:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.433 14:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:06.691 14:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.691 14:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:06.691 14:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.691 14:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:06.691 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:35:06.691 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:06.691 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.691 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:06.949 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.949 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:06.949 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:06.949 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.208 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.208 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:07.208 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.208 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:07.466 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.466 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:07.466 14:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:07.723 14:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:07.980 14:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:08.914 14:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:08.914 14:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:08.914 14:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.914 14:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:09.172 14:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.172 14:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:35:09.172 14:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.172 14:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:09.430 14:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:09.430 14:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:09.430 14:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.430 14:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:09.688 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.688 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:09.688 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.688 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:09.946 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.946 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:09.946 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.946 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:10.204 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.204 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:10.204 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.204 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:10.463 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.463 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:35:10.463 14:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:10.722 14:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:10.980 14:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:11.914 14:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:11.914 14:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:11.914 14:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.914 14:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:12.172 14:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.172 14:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:12.172 14:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.172 14:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:12.430 14:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:12.430 14:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:12.430 14:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.430 14:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:12.688 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.688 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:12.688 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.688 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:12.946 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.946 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:12.946 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.946 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:12.946 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
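The port_status assertions traced above each reduce to a single bdev_nvme_get_io_paths RPC against the bdevperf socket plus a jq filter keyed on the listener's trsvcid; a minimal stand-alone sketch of one such check, assuming the same socket path and field names shown in this run:

  # Ask bdevperf which I/O paths it sees and pull one flag ("current", "connected"
  # or "accessible") for the listener on port 4421, then assert the expected value.
  out=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").accessible')
  [[ "$out" == "true" ]]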
00:35:12.946 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:12.946 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.946 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:13.204 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:13.204 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:13.204 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:13.462 14:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:13.719 14:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:14.651 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:14.651 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:14.651 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.651 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:14.908 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:14.908 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:14.909 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.909 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:15.167 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:15.167 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:15.167 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.167 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:15.425 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:15.425 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
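Each set_ANA_state step in this trace is two nvmf_subsystem_listener_set_ana_state RPCs, one per listener port, followed by a one-second sleep so the initiator can pick up the ANA change before check_status re-queries the paths; a hedged sketch of the inaccessible/inaccessible transition being exercised here, reusing the subsystem NQN, address and ports from this log:

  # Move both listeners of cnode1 to the ANA "inaccessible" state on the target,
  # then wait briefly before the next round of path-status checks.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$RPC" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  "$RPC" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
  sleep 1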
00:35:15.425 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.425 14:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:15.683 14:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:15.683 14:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:15.683 14:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.683 14:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:15.950 14:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:15.950 14:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:15.950 14:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.950 14:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:16.217 14:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:16.217 14:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:16.218 14:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:16.479 14:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:16.737 14:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:17.672 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:17.672 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:17.672 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.672 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:17.929 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:17.929 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:17.929 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.929 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:18.186 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.186 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:18.186 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:18.186 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.444 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.444 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:18.444 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.444 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:18.701 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.701 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:18.701 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.701 14:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:18.959 14:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:18.959 14:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:18.959 14:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.959 14:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:18.959 14:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.959 14:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:19.217 14:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:35:19.217 14:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:35:19.475 14:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:19.733 14:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:20.667 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:20.667 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:20.667 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.667 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:20.925 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.925 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:20.925 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.925 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:21.183 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.183 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:21.183 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.183 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:21.441 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.441 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:21.441 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.441 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:21.699 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.699 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:21.699 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.699 14:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:21.957 14:03:36 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.957 14:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:21.957 14:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.957 14:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:22.215 14:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:22.215 14:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:22.215 14:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:22.473 14:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:22.473 14:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:23.848 14:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:23.848 14:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:23.848 14:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:23.848 14:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:23.848 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:23.848 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:23.848 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:23.848 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:24.107 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:24.107 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:24.107 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:24.107 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:24.366 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:24.366 14:03:38 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:24.366 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:24.366 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:24.366 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:24.366 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:24.366 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:24.366 14:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:24.625 14:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:24.625 14:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:24.625 14:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:24.625 14:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:24.883 14:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:24.883 14:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:24.883 14:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:25.142 14:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:25.401 14:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:26.338 14:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:26.338 14:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:26.338 14:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.338 14:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:26.596 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.596 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:26.596 14:03:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.596 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:26.855 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.855 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:26.855 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.855 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:27.114 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:27.114 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:27.114 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:27.114 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:27.388 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:27.388 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:27.388 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:27.388 14:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:27.660 14:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:27.660 14:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:27.660 14:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:27.660 14:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:27.919 14:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:27.919 14:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:27.919 14:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:28.178 14:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:28.437 14:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:29.373 14:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:29.373 14:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:29.373 14:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.374 14:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:29.633 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:29.633 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:29.633 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.633 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:29.891 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:29.891 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:29.891 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.891 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:30.150 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:30.150 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:30.150 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:30.150 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:30.409 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:30.409 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:30.409 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:30.409 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:30.668 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:30.668 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:30.668 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:30.668 14:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:30.927 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:30.927 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1590163 00:35:30.927 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 1590163 ']' 00:35:30.927 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 1590163 00:35:30.927 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:35:30.927 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:30.927 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1590163 00:35:30.927 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:35:30.927 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:35:30.927 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1590163' 00:35:30.927 killing process with pid 1590163 00:35:30.927 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 1590163 00:35:30.927 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 1590163 00:35:30.927 Connection closed with partial response: 00:35:30.927 00:35:30.927 00:35:31.188 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1590163 00:35:31.188 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:31.188 [2024-06-10 14:03:11.860496] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:35:31.188 [2024-06-10 14:03:11.860572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590163 ] 00:35:31.188 EAL: No free 2048 kB hugepages reported on node 1 00:35:31.188 [2024-06-10 14:03:11.955525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.188 [2024-06-10 14:03:12.026450] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:35:31.188 Running I/O for 90 seconds... 
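From this point the output is the bdevperf log captured in try.txt: for each queued command the qpair prints the command and its completion, and the ASYMMETRIC ACCESS INACCESSIBLE (03/02) status marks I/O answered while the path's ANA state left the namespace inaccessible. Outside the harness, a plain grep over the same file is enough to tally those completions (a hypothetical convenience command, not part of the test):

  # Count completions carrying the ANA 'inaccessible' status code (03/02) in the captured log.
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt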
00:35:31.188 [2024-06-10 14:03:27.865676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.188 [2024-06-10 14:03:27.865718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:31.188 [2024-06-10 14:03:27.865772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.188 [2024-06-10 14:03:27.865784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:31.188 [2024-06-10 14:03:27.865800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.188 [2024-06-10 14:03:27.865811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:31.188 [2024-06-10 14:03:27.865826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.188 [2024-06-10 14:03:27.865835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:31.188 [2024-06-10 14:03:27.865850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.188 [2024-06-10 14:03:27.865860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:31.188 [2024-06-10 14:03:27.865874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.188 [2024-06-10 14:03:27.865884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:31.188 [2024-06-10 14:03:27.865898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.188 [2024-06-10 14:03:27.865908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:31.188 [2024-06-10 14:03:27.865923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.188 [2024-06-10 14:03:27.865932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:31.188 [2024-06-10 14:03:27.866404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.188 [2024-06-10 14:03:27.866415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:31.188 [2024-06-10 14:03:27.866430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.188 [2024-06-10 14:03:27.866440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:31.188 [2024-06-10 14:03:27.866456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.188 [2024-06-10 14:03:27.866470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:31.189 [2024-06-10 14:03:27.866964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.866980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.866989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.867015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.867040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.867065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.867091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.867118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.867307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.189 [2024-06-10 14:03:27.867336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.189 [2024-06-10 14:03:27.867364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.189 [2024-06-10 14:03:27.867391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.189 [2024-06-10 14:03:27.867419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.189 [2024-06-10 14:03:27.867447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.189 [2024-06-10 14:03:27.867474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.189 [2024-06-10 14:03:27.867502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.867529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.867557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.867590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.867620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.867648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:31.189 [2024-06-10 14:03:27.867666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.189 [2024-06-10 14:03:27.867676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.867695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.867704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.867722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.867732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.867750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.867759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.867777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.867786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.867804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.867813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.867831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.867841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.867859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.867869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.867889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.867901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.867920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.867931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:35:31.190 [2024-06-10 14:03:27.867950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.867962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.867981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.867991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868485] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.190 [2024-06-10 14:03:27.868757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.190 [2024-06-10 14:03:27.868766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.868784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.868794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.868947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.868960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.868983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.868993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:27.869508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:99 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:27.869519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.700039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.191 [2024-06-10 14:03:42.700080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.700131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:42.700142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.700158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.191 [2024-06-10 14:03:42.700168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.702618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:42.702641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.702661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.191 [2024-06-10 14:03:42.702672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.702687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.191 [2024-06-10 14:03:42.702697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.702711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.191 [2024-06-10 14:03:42.702721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.702823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:42.702833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.702848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:42.702858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.702873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:42.702882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.702897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.191 [2024-06-10 14:03:42.702906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.702921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:42.702930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.702949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:42.702959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.702974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:42.702984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.702998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:42.703008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.703022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:42.703032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.703047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.191 [2024-06-10 14:03:42.703056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:31.191 [2024-06-10 14:03:42.703071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.192 [2024-06-10 14:03:42.703080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:31.192 [2024-06-10 14:03:42.703095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.192 [2024-06-10 14:03:42.703105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
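These completions, the handful that follow, and the shutdown notice with its verify-job summary are the expected outcome of this test: nvmf_host_multipath_status deliberately drives the active path's ANA group inaccessible, so commands queued on that path complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02); with dnr:0 the status is retryable, and the bdev layer can reissue the I/O on the other path, which is why the verify workload still reports zero failures in the summary below. A minimal sketch to tally the dump, using the same placeholder log file as in the sketch above:

    # Tally the path-related completions and the opcodes that hit them.
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log | wc -l
    grep -oE '(READ|WRITE) sqid:1' build.log | sort | uniq -c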
00:35:31.192 [2024-06-10 14:03:42.703120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.192 [2024-06-10 14:03:42.703129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:31.192 [2024-06-10 14:03:42.703144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.192 [2024-06-10 14:03:42.703154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:31.192 [2024-06-10 14:03:42.703169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.192 [2024-06-10 14:03:42.703178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:31.192 [2024-06-10 14:03:42.703192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.192 [2024-06-10 14:03:42.703202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:31.192 [2024-06-10 14:03:42.703217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.192 [2024-06-10 14:03:42.703227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:31.192 [2024-06-10 14:03:42.703243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.192 [2024-06-10 14:03:42.703254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:31.192 [2024-06-10 14:03:42.703269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.192 [2024-06-10 14:03:42.703279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:31.192 [2024-06-10 14:03:42.703293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.192 [2024-06-10 14:03:42.703303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:31.192 [2024-06-10 14:03:42.703317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.192 [2024-06-10 14:03:42.703327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:31.192 Received shutdown signal, test time was about 31.133967 seconds 00:35:31.192 00:35:31.192 Latency(us) 00:35:31.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.192 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:31.192 Verification 
LBA range: start 0x0 length 0x4000 00:35:31.192 Nvme0n1 : 31.13 8430.42 32.93 0.00 0.00 15164.30 255.59 4026531.84 00:35:31.192 =================================================================================================================== 00:35:31.192 Total : 8430.42 32.93 0.00 0.00 15164.30 255.59 4026531.84 00:35:31.192 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:31.452 rmmod nvme_tcp 00:35:31.452 rmmod nvme_fabrics 00:35:31.452 rmmod nvme_keyring 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1589639 ']' 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1589639 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 1589639 ']' 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 1589639 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1589639 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1589639' 00:35:31.452 killing process with pid 1589639 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 1589639 00:35:31.452 14:03:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 1589639 00:35:31.711 14:03:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:31.711 14:03:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
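The multipath_status run then tears itself down: the subsystem is deleted over rpc.py, the temporary try.txt is removed, and nvmftestfini unloads nvme_tcp/nvme_fabrics/nvme_keyring and kills the target process (pid 1589639); the remaining address flush and the END TEST timing follow just below. Condensed into plain commands, the traced teardown amounts to roughly the following sketch; it only restates what the helper functions did in this run, with the pid and paths copied from the trace:

    # Teardown as traced for this run (normally performed by nvmftestfini and friends).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # drop the test subsystem
    rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    modprobe -v -r nvme-tcp                                  # also removes nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 1589639                                             # target pid from this run; the script also waits for it
    ip -4 addr flush cvl_0_1                                 # release the initiator-side address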
00:35:31.711 14:03:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:31.711 14:03:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:31.711 14:03:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:31.711 14:03:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.711 14:03:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:31.711 14:03:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.249 14:03:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:34.249 00:35:34.249 real 0m47.729s 00:35:34.249 user 2m1.587s 00:35:34.249 sys 0m17.976s 00:35:34.249 14:03:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:34.249 14:03:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:34.249 ************************************ 00:35:34.249 END TEST nvmf_host_multipath_status 00:35:34.249 ************************************ 00:35:34.249 14:03:48 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:34.249 14:03:48 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:34.249 14:03:48 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:34.249 14:03:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.249 ************************************ 00:35:34.249 START TEST nvmf_discovery_remove_ifc 00:35:34.249 ************************************ 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:34.249 * Looking for test storage... 
00:35:34.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:34.249 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:35:34.250 14:03:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:42.377 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:42.378 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:42.378 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:42.378 14:03:56 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:42.378 Found net devices under 0000:af:00.0: cvl_0_0 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:42.378 Found net devices under 0000:af:00.1: cvl_0_1 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:42.378 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:42.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:42.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:35:42.638 00:35:42.638 --- 10.0.0.2 ping statistics --- 00:35:42.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:42.638 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:42.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:42.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:35:42.638 00:35:42.638 --- 10.0.0.1 ping statistics --- 00:35:42.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:42.638 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1600312 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1600312 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 1600312 ']' 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:42.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:42.638 14:03:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:42.638 [2024-06-10 14:03:57.031743] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:35:42.638 [2024-06-10 14:03:57.031804] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:42.638 EAL: No free 2048 kB hugepages reported on node 1 00:35:42.898 [2024-06-10 14:03:57.148247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.898 [2024-06-10 14:03:57.233975] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:42.898 [2024-06-10 14:03:57.234020] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:42.898 [2024-06-10 14:03:57.234034] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:42.898 [2024-06-10 14:03:57.234047] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:42.898 [2024-06-10 14:03:57.234057] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:42.898 [2024-06-10 14:03:57.234082] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.466 14:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:43.466 14:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:35:43.466 14:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:43.467 14:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:43.467 14:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:43.726 14:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:43.726 14:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:43.726 14:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:43.726 14:03:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:43.726 [2024-06-10 14:03:57.993921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:43.726 [2024-06-10 14:03:58.002084] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:43.726 null0 00:35:43.726 [2024-06-10 14:03:58.034099] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:43.726 14:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:43.726 14:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1600422 00:35:43.726 14:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:43.726 14:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1600422 /tmp/host.sock 00:35:43.726 14:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 1600422 ']' 00:35:43.726 14:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:35:43.726 14:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:43.726 
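At this point the target, running inside the cvl_0_0_ns_spdk namespace, has its TCP transport initialized and is listening on 10.0.0.2 port 8009 for discovery and port 4420 for the test subsystem created alongside the null0 bdev, and a second nvmf_tgt has just been launched as the host with its own RPC socket at /tmp/host.sock, held idle by --wait-for-rpc and with bdev_nvme debug logging enabled. Everything that follows is driven over that socket. Condensed from the trace (rpc_cmd in the script is a thin wrapper; direct rpc.py calls are shown here as an illustration, with every flag exactly as the script passes it), the host-side sequence is roughly:

    # Host-side bring-up, condensed from the traced discovery_remove_ifc steps.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    host_sock=/tmp/host.sock

    # Start the host app paused on its own RPC socket (the script waits for the
    # socket with waitforlisten before issuing any RPCs).
    $spdk/build/bin/nvmf_tgt -m 0x1 -r "$host_sock" --wait-for-rpc -L bdev_nvme &

    # Configure, release the framework, then attach to the discovery service on the target.
    $spdk/scripts/rpc.py -s "$host_sock" bdev_nvme_set_options -e 1
    $spdk/scripts/rpc.py -s "$host_sock" framework_start_init
    $spdk/scripts/rpc.py -s "$host_sock" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach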
14:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:43.726 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:43.726 14:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:43.726 14:03:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:43.726 [2024-06-10 14:03:58.110264] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:35:43.726 [2024-06-10 14:03:58.110332] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600422 ] 00:35:43.726 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.986 [2024-06-10 14:03:58.230489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.986 [2024-06-10 14:03:58.315394] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.554 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:44.554 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:35:44.554 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:44.554 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:44.554 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.554 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:44.554 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.554 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:44.554 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.554 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:44.813 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.813 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:44.813 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.813 14:03:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.752 [2024-06-10 14:04:00.143723] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:45.752 [2024-06-10 14:04:00.143756] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:45.752 [2024-06-10 14:04:00.143779] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:46.012 [2024-06-10 14:04:00.231049] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:46.012 [2024-06-10 14:04:00.457503] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:46.012 [2024-06-10 14:04:00.457564] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:46.012 [2024-06-10 14:04:00.457600] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:46.012 [2024-06-10 14:04:00.457623] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:46.012 [2024-06-10 14:04:00.457653] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:46.012 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.012 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:46.012 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:46.012 [2024-06-10 14:04:00.462612] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18923a0 was disconnected and freed. delete nvme_qpair. 00:35:46.012 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:46.012 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:46.012 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.012 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:46.012 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:46.012 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ 
nvme0n1 != '' ]] 00:35:46.271 14:04:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:47.649 14:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:47.649 14:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:47.649 14:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:47.649 14:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:47.649 14:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:47.649 14:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:47.649 14:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:47.649 14:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:47.649 14:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:47.649 14:04:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:48.584 14:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:48.584 14:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:48.584 14:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:48.584 14:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:48.584 14:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.584 14:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:48.584 14:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:48.584 14:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:48.584 14:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:48.584 14:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:49.521 14:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:49.521 14:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:49.521 14:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:49.521 14:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:49.521 14:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:49.521 14:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:49.521 14:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:49.521 14:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:49.521 14:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:49.521 14:04:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:50.457 14:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:35:50.457 14:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:50.457 14:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:50.457 14:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:50.457 14:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:50.457 14:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:50.457 14:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:50.457 14:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:50.457 14:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:50.457 14:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:51.831 [2024-06-10 14:04:05.898079] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:51.831 [2024-06-10 14:04:05.898129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:51.831 [2024-06-10 14:04:05.898147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.831 [2024-06-10 14:04:05.898163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:51.831 [2024-06-10 14:04:05.898175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.831 [2024-06-10 14:04:05.898189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:51.831 [2024-06-10 14:04:05.898201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.831 [2024-06-10 14:04:05.898215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:51.831 [2024-06-10 14:04:05.898227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.831 [2024-06-10 14:04:05.898246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:51.831 [2024-06-10 14:04:05.898259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.831 [2024-06-10 14:04:05.898272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1859510 is same with the state(5) to be set 00:35:51.831 [2024-06-10 14:04:05.908098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1859510 (9): Bad file descriptor 00:35:51.831 [2024-06-10 14:04:05.918143] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:51.831 14:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:51.831 
14:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:51.831 14:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:51.831 14:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:51.831 14:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:51.831 14:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:51.831 14:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:52.764 [2024-06-10 14:04:06.925691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:52.764 [2024-06-10 14:04:06.925785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1859510 with addr=10.0.0.2, port=4420 00:35:52.764 [2024-06-10 14:04:06.925827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1859510 is same with the state(5) to be set 00:35:52.764 [2024-06-10 14:04:06.925898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1859510 (9): Bad file descriptor 00:35:52.764 [2024-06-10 14:04:06.926805] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:52.764 [2024-06-10 14:04:06.926864] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:52.764 [2024-06-10 14:04:06.926895] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:52.764 [2024-06-10 14:04:06.926927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:52.764 [2024-06-10 14:04:06.926974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:52.764 [2024-06-10 14:04:06.927005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:52.764 14:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:52.764 14:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:52.764 14:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:53.698 [2024-06-10 14:04:07.929512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
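In outline, the first half of this test (as traced above) starts a second SPDK app as the discovery host, points bdev_nvme_start_discovery at the target's discovery service on 10.0.0.2:8009, then pulls the target-side interface and polls bdev_get_bdevs until the attached namespace disappears. A condensed sketch of those steps, using the RPCs exactly as they appear in the trace; rpc_cmd, get_bdev_list and wait_for_bdev are the test's own helpers and are only approximated here:

    # Host-side SPDK app on its own RPC socket (flags copied from the trace above).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!

    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc_cmd -s /tmp/host.sock framework_start_init
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    # Approximation of the helpers the xtrace keeps expanding.
    get_bdev_list() { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
    wait_for_bdev() { while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done; }

    wait_for_bdev nvme0n1                                            # discovery attached nvme0
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''                                                 # ctrlr-loss timeout expires, bdev is removed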
00:35:53.698 [2024-06-10 14:04:07.929554] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:53.698 [2024-06-10 14:04:07.929590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.698 [2024-06-10 14:04:07.929606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.698 [2024-06-10 14:04:07.929622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.698 [2024-06-10 14:04:07.929635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.698 [2024-06-10 14:04:07.929649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.698 [2024-06-10 14:04:07.929667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.698 [2024-06-10 14:04:07.929680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.698 [2024-06-10 14:04:07.929693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.698 [2024-06-10 14:04:07.929707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.698 [2024-06-10 14:04:07.929719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.698 [2024-06-10 14:04:07.929732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:35:53.698 [2024-06-10 14:04:07.929764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18589a0 (9): Bad file descriptor 00:35:53.698 [2024-06-10 14:04:07.930766] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:53.698 [2024-06-10 14:04:07.930782] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:53.698 14:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:53.698 14:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:53.698 14:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:53.699 14:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:53.699 14:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:53.699 14:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:53.699 14:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:53.699 14:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:53.699 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:53.699 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:53.699 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:53.699 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:53.699 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:53.699 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:53.699 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:53.699 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:53.699 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:53.699 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:53.699 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:53.699 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:53.957 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:53.957 14:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:54.890 14:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:54.890 14:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:54.890 14:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:54.890 14:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:54.890 14:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:35:54.890 14:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:54.890 14:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:54.890 14:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:54.890 14:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:54.890 14:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:55.823 [2024-06-10 14:04:09.990479] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:55.823 [2024-06-10 14:04:09.990501] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:55.823 [2024-06-10 14:04:09.990520] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:55.823 [2024-06-10 14:04:10.118947] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:55.823 14:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:55.823 14:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:55.823 14:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:55.823 14:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.823 14:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:55.823 14:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:55.823 14:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:55.823 14:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.824 14:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:55.824 14:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:56.080 [2024-06-10 14:04:10.341381] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:56.080 [2024-06-10 14:04:10.341427] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:56.080 [2024-06-10 14:04:10.341453] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:56.080 [2024-06-10 14:04:10.341472] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:56.080 [2024-06-10 14:04:10.341483] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:56.080 [2024-06-10 14:04:10.348507] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x189cca0 was disconnected and freed. delete nvme_qpair. 
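The second half of the test, traced just above, reverses the fault: the address goes back on the interface, the link comes up, and the still-running discovery service is expected to attach a brand-new controller (nvme1, hence the wait for nvme1n1 rather than nvme0n1). Sketch, again using the commands shown in the trace:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # re-discovery attaches a fresh controller, so the new namespace is nvme1n1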
00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1600422 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 1600422 ']' 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 1600422 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1600422 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1600422' 00:35:57.080 killing process with pid 1600422 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 1600422 00:35:57.080 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 1600422 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:57.345 rmmod nvme_tcp 00:35:57.345 rmmod nvme_fabrics 00:35:57.345 rmmod nvme_keyring 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
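The teardown traced here relies on the trap registered when the host app was started: an interrupted or failed run still dumps shared memory and kills both processes, while the success path disarms the trap and runs the same steps explicitly. A minimal sketch of that pattern, with the helper names (process_shm, killprocess, nvmftestfini) taken from the trace:

    trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
    # ... test body ...
    trap - SIGINT SIGTERM EXIT       # success path: disarm the trap
    killprocess $hostpid             # stop the discovery host app
    nvmftestfini                     # stop the target, unload nvme-tcp/nvme-fabrics, flush the test netns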
00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1600312 ']' 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1600312 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 1600312 ']' 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 1600312 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1600312 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1600312' 00:35:57.345 killing process with pid 1600312 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 1600312 00:35:57.345 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 1600312 00:35:57.603 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:57.603 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:57.603 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:57.603 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:57.603 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:57.603 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:57.603 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:57.603 14:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.134 14:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:00.134 00:36:00.134 real 0m25.780s 00:36:00.134 user 0m29.445s 00:36:00.134 sys 0m9.103s 00:36:00.134 14:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:00.134 14:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:00.134 ************************************ 00:36:00.134 END TEST nvmf_discovery_remove_ifc 00:36:00.134 ************************************ 00:36:00.134 14:04:14 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:00.134 14:04:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:36:00.134 14:04:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:00.134 14:04:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.134 ************************************ 00:36:00.134 START TEST nvmf_identify_kernel_target 00:36:00.134 ************************************ 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:00.134 * Looking for test storage... 00:36:00.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
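nvmf/common.sh, sourced above, derives the host identity it later passes to nvme-cli from nvme gen-hostnqn: the generated NQN embeds a UUID, and that same UUID doubles as the host ID. A sketch of what the trace shows; the exact parameter expansion used by common.sh is an assumption here, only the resulting values are taken from the trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:809b5fbc-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # assumed derivation: strip everything up to "uuid:"
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'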
00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
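nvmftestinit, whose expansion continues below, ends in nvmf_tcp_init: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), the other port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened, and both directions are verified with a ping. A condensed sketch of those steps as they appear in the trace further down:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator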
00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:36:00.134 14:04:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:36:08.245 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:08.246 14:04:21 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:08.246 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:08.246 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:08.246 
14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:08.246 Found net devices under 0000:af:00.0: cvl_0_0 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:08.246 Found net devices under 0000:af:00.1: cvl_0_1 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:08.246 14:04:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:08.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:08.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:36:08.246 00:36:08.246 --- 10.0.0.2 ping statistics --- 00:36:08.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.246 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:08.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:08.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:36:08.246 00:36:08.246 --- 10.0.0.1 ping statistics --- 00:36:08.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.246 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # local ip 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.246 
14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:36:08.246 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:08.247 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 nvmf_port=4420 00:36:08.247 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:08.247 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:08.247 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:08.247 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:08.247 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:36:08.247 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:08.247 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:08.247 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:08.247 14:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:11.531 Waiting for block devices as requested 00:36:11.531 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:11.531 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:11.531 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:11.531 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:11.790 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:11.790 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:11.790 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:12.049 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:12.049 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:12.049 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:12.308 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:12.308 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:12.308 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:12.308 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:12.568 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:12.568 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:12.568 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
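configure_kernel_target, whose mkdir/echo/ln expansion follows below, builds the kernel NVMe/TCP target entirely through nvmet's configfs tree and exports the local NVMe drive as namespace 1 of nqn.2016-06.io.spdk:testnqn. The sketch below uses the standard nvmet configfs attribute names (attr_allow_any_host, device_path, enable, addr_*, and the port-to-subsystem symlink); the exact attributes and ordering used by common.sh are assumptions inferred from the echoed values in the trace, not copied from it:

    modprobe nvmet        # the trace loads nvme-tcp separately for the initiator side
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$sub/namespaces/1" "$port"
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
    # Verify from the initiator side, as the test does next:
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -a 10.0.0.1 -t tcp -s 4420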
00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:12.827 No valid GPT data, bailing 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@657 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # echo SPDK-test 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo 1 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ -b /dev/nvme0n1 ]] 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo /dev/nvme0n1 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo 1 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # echo 10.0.0.1 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # echo tcp 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # echo 4420 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # echo ipv4 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:12.827 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:13.087 00:36:13.088 Discovery Log Number of Records 2, Generation counter 2 00:36:13.088 =====Discovery Log Entry 0====== 00:36:13.088 trtype: tcp 00:36:13.088 adrfam: ipv4 00:36:13.088 subtype: current discovery subsystem 00:36:13.088 treq: not specified, sq flow control disable supported 00:36:13.088 portid: 1 00:36:13.088 trsvcid: 4420 00:36:13.088 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:13.088 traddr: 10.0.0.1 00:36:13.088 eflags: none 00:36:13.088 sectype: none 00:36:13.088 =====Discovery Log Entry 1====== 
00:36:13.088 trtype: tcp 00:36:13.088 adrfam: ipv4 00:36:13.088 subtype: nvme subsystem 00:36:13.088 treq: not specified, sq flow control disable supported 00:36:13.088 portid: 1 00:36:13.088 trsvcid: 4420 00:36:13.088 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:13.088 traddr: 10.0.0.1 00:36:13.088 eflags: none 00:36:13.088 sectype: none 00:36:13.088 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:13.088 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:13.088 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.088 ===================================================== 00:36:13.088 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:13.088 ===================================================== 00:36:13.088 Controller Capabilities/Features 00:36:13.088 ================================ 00:36:13.088 Vendor ID: 0000 00:36:13.088 Subsystem Vendor ID: 0000 00:36:13.088 Serial Number: 25ef054b5029eebbfc0b 00:36:13.088 Model Number: Linux 00:36:13.088 Firmware Version: 6.7.0-68 00:36:13.088 Recommended Arb Burst: 0 00:36:13.088 IEEE OUI Identifier: 00 00 00 00:36:13.088 Multi-path I/O 00:36:13.088 May have multiple subsystem ports: No 00:36:13.088 May have multiple controllers: No 00:36:13.088 Associated with SR-IOV VF: No 00:36:13.088 Max Data Transfer Size: Unlimited 00:36:13.088 Max Number of Namespaces: 0 00:36:13.088 Max Number of I/O Queues: 1024 00:36:13.088 NVMe Specification Version (VS): 1.3 00:36:13.088 NVMe Specification Version (Identify): 1.3 00:36:13.088 Maximum Queue Entries: 1024 00:36:13.088 Contiguous Queues Required: No 00:36:13.088 Arbitration Mechanisms Supported 00:36:13.088 Weighted Round Robin: Not Supported 00:36:13.088 Vendor Specific: Not Supported 00:36:13.088 Reset Timeout: 7500 ms 00:36:13.088 Doorbell Stride: 4 bytes 00:36:13.088 NVM Subsystem Reset: Not Supported 00:36:13.088 Command Sets Supported 00:36:13.088 NVM Command Set: Supported 00:36:13.088 Boot Partition: Not Supported 00:36:13.088 Memory Page Size Minimum: 4096 bytes 00:36:13.088 Memory Page Size Maximum: 4096 bytes 00:36:13.088 Persistent Memory Region: Not Supported 00:36:13.088 Optional Asynchronous Events Supported 00:36:13.088 Namespace Attribute Notices: Not Supported 00:36:13.088 Firmware Activation Notices: Not Supported 00:36:13.088 ANA Change Notices: Not Supported 00:36:13.088 PLE Aggregate Log Change Notices: Not Supported 00:36:13.088 LBA Status Info Alert Notices: Not Supported 00:36:13.088 EGE Aggregate Log Change Notices: Not Supported 00:36:13.088 Normal NVM Subsystem Shutdown event: Not Supported 00:36:13.088 Zone Descriptor Change Notices: Not Supported 00:36:13.088 Discovery Log Change Notices: Supported 00:36:13.088 Controller Attributes 00:36:13.088 128-bit Host Identifier: Not Supported 00:36:13.088 Non-Operational Permissive Mode: Not Supported 00:36:13.088 NVM Sets: Not Supported 00:36:13.088 Read Recovery Levels: Not Supported 00:36:13.088 Endurance Groups: Not Supported 00:36:13.088 Predictable Latency Mode: Not Supported 00:36:13.088 Traffic Based Keep ALive: Not Supported 00:36:13.088 Namespace Granularity: Not Supported 00:36:13.088 SQ Associations: Not Supported 00:36:13.088 UUID List: Not Supported 00:36:13.088 Multi-Domain Subsystem: Not Supported 00:36:13.088 Fixed Capacity Management: Not Supported 00:36:13.088 Variable Capacity Management: Not 
Supported 00:36:13.088 Delete Endurance Group: Not Supported 00:36:13.088 Delete NVM Set: Not Supported 00:36:13.088 Extended LBA Formats Supported: Not Supported 00:36:13.088 Flexible Data Placement Supported: Not Supported 00:36:13.088 00:36:13.088 Controller Memory Buffer Support 00:36:13.088 ================================ 00:36:13.088 Supported: No 00:36:13.088 00:36:13.088 Persistent Memory Region Support 00:36:13.088 ================================ 00:36:13.088 Supported: No 00:36:13.088 00:36:13.088 Admin Command Set Attributes 00:36:13.088 ============================ 00:36:13.088 Security Send/Receive: Not Supported 00:36:13.088 Format NVM: Not Supported 00:36:13.088 Firmware Activate/Download: Not Supported 00:36:13.088 Namespace Management: Not Supported 00:36:13.088 Device Self-Test: Not Supported 00:36:13.088 Directives: Not Supported 00:36:13.088 NVMe-MI: Not Supported 00:36:13.088 Virtualization Management: Not Supported 00:36:13.088 Doorbell Buffer Config: Not Supported 00:36:13.088 Get LBA Status Capability: Not Supported 00:36:13.088 Command & Feature Lockdown Capability: Not Supported 00:36:13.088 Abort Command Limit: 1 00:36:13.088 Async Event Request Limit: 1 00:36:13.088 Number of Firmware Slots: N/A 00:36:13.088 Firmware Slot 1 Read-Only: N/A 00:36:13.088 Firmware Activation Without Reset: N/A 00:36:13.088 Multiple Update Detection Support: N/A 00:36:13.088 Firmware Update Granularity: No Information Provided 00:36:13.088 Per-Namespace SMART Log: No 00:36:13.088 Asymmetric Namespace Access Log Page: Not Supported 00:36:13.088 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:13.088 Command Effects Log Page: Not Supported 00:36:13.088 Get Log Page Extended Data: Supported 00:36:13.088 Telemetry Log Pages: Not Supported 00:36:13.088 Persistent Event Log Pages: Not Supported 00:36:13.088 Supported Log Pages Log Page: May Support 00:36:13.088 Commands Supported & Effects Log Page: Not Supported 00:36:13.088 Feature Identifiers & Effects Log Page:May Support 00:36:13.088 NVMe-MI Commands & Effects Log Page: May Support 00:36:13.088 Data Area 4 for Telemetry Log: Not Supported 00:36:13.088 Error Log Page Entries Supported: 1 00:36:13.088 Keep Alive: Not Supported 00:36:13.088 00:36:13.088 NVM Command Set Attributes 00:36:13.088 ========================== 00:36:13.088 Submission Queue Entry Size 00:36:13.088 Max: 1 00:36:13.088 Min: 1 00:36:13.088 Completion Queue Entry Size 00:36:13.088 Max: 1 00:36:13.088 Min: 1 00:36:13.088 Number of Namespaces: 0 00:36:13.088 Compare Command: Not Supported 00:36:13.088 Write Uncorrectable Command: Not Supported 00:36:13.088 Dataset Management Command: Not Supported 00:36:13.088 Write Zeroes Command: Not Supported 00:36:13.088 Set Features Save Field: Not Supported 00:36:13.088 Reservations: Not Supported 00:36:13.088 Timestamp: Not Supported 00:36:13.088 Copy: Not Supported 00:36:13.088 Volatile Write Cache: Not Present 00:36:13.088 Atomic Write Unit (Normal): 1 00:36:13.088 Atomic Write Unit (PFail): 1 00:36:13.088 Atomic Compare & Write Unit: 1 00:36:13.088 Fused Compare & Write: Not Supported 00:36:13.088 Scatter-Gather List 00:36:13.088 SGL Command Set: Supported 00:36:13.088 SGL Keyed: Not Supported 00:36:13.088 SGL Bit Bucket Descriptor: Not Supported 00:36:13.088 SGL Metadata Pointer: Not Supported 00:36:13.088 Oversized SGL: Not Supported 00:36:13.088 SGL Metadata Address: Not Supported 00:36:13.088 SGL Offset: Supported 00:36:13.088 Transport SGL Data Block: Not Supported 00:36:13.088 Replay Protected Memory Block: 
Not Supported 00:36:13.088 00:36:13.088 Firmware Slot Information 00:36:13.088 ========================= 00:36:13.088 Active slot: 0 00:36:13.088 00:36:13.088 00:36:13.088 Error Log 00:36:13.088 ========= 00:36:13.088 00:36:13.088 Active Namespaces 00:36:13.088 ================= 00:36:13.088 Discovery Log Page 00:36:13.088 ================== 00:36:13.088 Generation Counter: 2 00:36:13.088 Number of Records: 2 00:36:13.088 Record Format: 0 00:36:13.088 00:36:13.088 Discovery Log Entry 0 00:36:13.088 ---------------------- 00:36:13.088 Transport Type: 3 (TCP) 00:36:13.088 Address Family: 1 (IPv4) 00:36:13.088 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:13.088 Entry Flags: 00:36:13.088 Duplicate Returned Information: 0 00:36:13.088 Explicit Persistent Connection Support for Discovery: 0 00:36:13.088 Transport Requirements: 00:36:13.088 Secure Channel: Not Specified 00:36:13.088 Port ID: 1 (0x0001) 00:36:13.088 Controller ID: 65535 (0xffff) 00:36:13.088 Admin Max SQ Size: 32 00:36:13.088 Transport Service Identifier: 4420 00:36:13.088 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:13.088 Transport Address: 10.0.0.1 00:36:13.088 Discovery Log Entry 1 00:36:13.089 ---------------------- 00:36:13.089 Transport Type: 3 (TCP) 00:36:13.089 Address Family: 1 (IPv4) 00:36:13.089 Subsystem Type: 2 (NVM Subsystem) 00:36:13.089 Entry Flags: 00:36:13.089 Duplicate Returned Information: 0 00:36:13.089 Explicit Persistent Connection Support for Discovery: 0 00:36:13.089 Transport Requirements: 00:36:13.089 Secure Channel: Not Specified 00:36:13.089 Port ID: 1 (0x0001) 00:36:13.089 Controller ID: 65535 (0xffff) 00:36:13.089 Admin Max SQ Size: 32 00:36:13.089 Transport Service Identifier: 4420 00:36:13.089 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:13.089 Transport Address: 10.0.0.1 00:36:13.089 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:13.089 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.089 get_feature(0x01) failed 00:36:13.089 get_feature(0x02) failed 00:36:13.089 get_feature(0x04) failed 00:36:13.089 ===================================================== 00:36:13.089 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:13.089 ===================================================== 00:36:13.089 Controller Capabilities/Features 00:36:13.089 ================================ 00:36:13.089 Vendor ID: 0000 00:36:13.089 Subsystem Vendor ID: 0000 00:36:13.089 Serial Number: 452696d7a2fda4154419 00:36:13.089 Model Number: SPDK-test 00:36:13.089 Firmware Version: 6.7.0-68 00:36:13.089 Recommended Arb Burst: 6 00:36:13.089 IEEE OUI Identifier: 00 00 00 00:36:13.089 Multi-path I/O 00:36:13.089 May have multiple subsystem ports: Yes 00:36:13.089 May have multiple controllers: Yes 00:36:13.089 Associated with SR-IOV VF: No 00:36:13.089 Max Data Transfer Size: Unlimited 00:36:13.089 Max Number of Namespaces: 1024 00:36:13.089 Max Number of I/O Queues: 128 00:36:13.089 NVMe Specification Version (VS): 1.3 00:36:13.089 NVMe Specification Version (Identify): 1.3 00:36:13.089 Maximum Queue Entries: 1024 00:36:13.089 Contiguous Queues Required: No 00:36:13.089 Arbitration Mechanisms Supported 00:36:13.089 Weighted Round Robin: Not Supported 00:36:13.089 Vendor Specific: Not Supported 00:36:13.089 Reset Timeout: 
7500 ms 00:36:13.089 Doorbell Stride: 4 bytes 00:36:13.089 NVM Subsystem Reset: Not Supported 00:36:13.089 Command Sets Supported 00:36:13.089 NVM Command Set: Supported 00:36:13.089 Boot Partition: Not Supported 00:36:13.089 Memory Page Size Minimum: 4096 bytes 00:36:13.089 Memory Page Size Maximum: 4096 bytes 00:36:13.089 Persistent Memory Region: Not Supported 00:36:13.089 Optional Asynchronous Events Supported 00:36:13.089 Namespace Attribute Notices: Supported 00:36:13.089 Firmware Activation Notices: Not Supported 00:36:13.089 ANA Change Notices: Supported 00:36:13.089 PLE Aggregate Log Change Notices: Not Supported 00:36:13.089 LBA Status Info Alert Notices: Not Supported 00:36:13.089 EGE Aggregate Log Change Notices: Not Supported 00:36:13.089 Normal NVM Subsystem Shutdown event: Not Supported 00:36:13.089 Zone Descriptor Change Notices: Not Supported 00:36:13.089 Discovery Log Change Notices: Not Supported 00:36:13.089 Controller Attributes 00:36:13.089 128-bit Host Identifier: Supported 00:36:13.089 Non-Operational Permissive Mode: Not Supported 00:36:13.089 NVM Sets: Not Supported 00:36:13.089 Read Recovery Levels: Not Supported 00:36:13.089 Endurance Groups: Not Supported 00:36:13.089 Predictable Latency Mode: Not Supported 00:36:13.089 Traffic Based Keep ALive: Supported 00:36:13.089 Namespace Granularity: Not Supported 00:36:13.089 SQ Associations: Not Supported 00:36:13.089 UUID List: Not Supported 00:36:13.089 Multi-Domain Subsystem: Not Supported 00:36:13.089 Fixed Capacity Management: Not Supported 00:36:13.089 Variable Capacity Management: Not Supported 00:36:13.089 Delete Endurance Group: Not Supported 00:36:13.089 Delete NVM Set: Not Supported 00:36:13.089 Extended LBA Formats Supported: Not Supported 00:36:13.089 Flexible Data Placement Supported: Not Supported 00:36:13.089 00:36:13.089 Controller Memory Buffer Support 00:36:13.089 ================================ 00:36:13.089 Supported: No 00:36:13.089 00:36:13.089 Persistent Memory Region Support 00:36:13.089 ================================ 00:36:13.089 Supported: No 00:36:13.089 00:36:13.089 Admin Command Set Attributes 00:36:13.089 ============================ 00:36:13.089 Security Send/Receive: Not Supported 00:36:13.089 Format NVM: Not Supported 00:36:13.089 Firmware Activate/Download: Not Supported 00:36:13.089 Namespace Management: Not Supported 00:36:13.089 Device Self-Test: Not Supported 00:36:13.089 Directives: Not Supported 00:36:13.089 NVMe-MI: Not Supported 00:36:13.089 Virtualization Management: Not Supported 00:36:13.089 Doorbell Buffer Config: Not Supported 00:36:13.089 Get LBA Status Capability: Not Supported 00:36:13.089 Command & Feature Lockdown Capability: Not Supported 00:36:13.089 Abort Command Limit: 4 00:36:13.089 Async Event Request Limit: 4 00:36:13.089 Number of Firmware Slots: N/A 00:36:13.089 Firmware Slot 1 Read-Only: N/A 00:36:13.089 Firmware Activation Without Reset: N/A 00:36:13.089 Multiple Update Detection Support: N/A 00:36:13.089 Firmware Update Granularity: No Information Provided 00:36:13.089 Per-Namespace SMART Log: Yes 00:36:13.089 Asymmetric Namespace Access Log Page: Supported 00:36:13.089 ANA Transition Time : 10 sec 00:36:13.089 00:36:13.089 Asymmetric Namespace Access Capabilities 00:36:13.089 ANA Optimized State : Supported 00:36:13.089 ANA Non-Optimized State : Supported 00:36:13.089 ANA Inaccessible State : Supported 00:36:13.089 ANA Persistent Loss State : Supported 00:36:13.089 ANA Change State : Supported 00:36:13.089 ANAGRPID is not changed : No 00:36:13.089 
Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:13.089 00:36:13.089 ANA Group Identifier Maximum : 128 00:36:13.089 Number of ANA Group Identifiers : 128 00:36:13.089 Max Number of Allowed Namespaces : 1024 00:36:13.089 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:13.089 Command Effects Log Page: Supported 00:36:13.089 Get Log Page Extended Data: Supported 00:36:13.089 Telemetry Log Pages: Not Supported 00:36:13.089 Persistent Event Log Pages: Not Supported 00:36:13.089 Supported Log Pages Log Page: May Support 00:36:13.089 Commands Supported & Effects Log Page: Not Supported 00:36:13.089 Feature Identifiers & Effects Log Page:May Support 00:36:13.089 NVMe-MI Commands & Effects Log Page: May Support 00:36:13.089 Data Area 4 for Telemetry Log: Not Supported 00:36:13.089 Error Log Page Entries Supported: 128 00:36:13.089 Keep Alive: Supported 00:36:13.089 Keep Alive Granularity: 1000 ms 00:36:13.089 00:36:13.089 NVM Command Set Attributes 00:36:13.089 ========================== 00:36:13.089 Submission Queue Entry Size 00:36:13.089 Max: 64 00:36:13.089 Min: 64 00:36:13.089 Completion Queue Entry Size 00:36:13.089 Max: 16 00:36:13.089 Min: 16 00:36:13.089 Number of Namespaces: 1024 00:36:13.089 Compare Command: Not Supported 00:36:13.089 Write Uncorrectable Command: Not Supported 00:36:13.089 Dataset Management Command: Supported 00:36:13.089 Write Zeroes Command: Supported 00:36:13.089 Set Features Save Field: Not Supported 00:36:13.089 Reservations: Not Supported 00:36:13.089 Timestamp: Not Supported 00:36:13.089 Copy: Not Supported 00:36:13.089 Volatile Write Cache: Present 00:36:13.089 Atomic Write Unit (Normal): 1 00:36:13.089 Atomic Write Unit (PFail): 1 00:36:13.089 Atomic Compare & Write Unit: 1 00:36:13.089 Fused Compare & Write: Not Supported 00:36:13.089 Scatter-Gather List 00:36:13.089 SGL Command Set: Supported 00:36:13.089 SGL Keyed: Not Supported 00:36:13.089 SGL Bit Bucket Descriptor: Not Supported 00:36:13.089 SGL Metadata Pointer: Not Supported 00:36:13.089 Oversized SGL: Not Supported 00:36:13.089 SGL Metadata Address: Not Supported 00:36:13.089 SGL Offset: Supported 00:36:13.089 Transport SGL Data Block: Not Supported 00:36:13.089 Replay Protected Memory Block: Not Supported 00:36:13.089 00:36:13.089 Firmware Slot Information 00:36:13.089 ========================= 00:36:13.089 Active slot: 0 00:36:13.089 00:36:13.089 Asymmetric Namespace Access 00:36:13.089 =========================== 00:36:13.089 Change Count : 0 00:36:13.089 Number of ANA Group Descriptors : 1 00:36:13.089 ANA Group Descriptor : 0 00:36:13.089 ANA Group ID : 1 00:36:13.089 Number of NSID Values : 1 00:36:13.089 Change Count : 0 00:36:13.089 ANA State : 1 00:36:13.089 Namespace Identifier : 1 00:36:13.089 00:36:13.089 Commands Supported and Effects 00:36:13.089 ============================== 00:36:13.089 Admin Commands 00:36:13.089 -------------- 00:36:13.089 Get Log Page (02h): Supported 00:36:13.089 Identify (06h): Supported 00:36:13.089 Abort (08h): Supported 00:36:13.089 Set Features (09h): Supported 00:36:13.089 Get Features (0Ah): Supported 00:36:13.090 Asynchronous Event Request (0Ch): Supported 00:36:13.090 Keep Alive (18h): Supported 00:36:13.090 I/O Commands 00:36:13.090 ------------ 00:36:13.090 Flush (00h): Supported 00:36:13.090 Write (01h): Supported LBA-Change 00:36:13.090 Read (02h): Supported 00:36:13.090 Write Zeroes (08h): Supported LBA-Change 00:36:13.090 Dataset Management (09h): Supported 00:36:13.090 00:36:13.090 Error Log 00:36:13.090 ========= 00:36:13.090 Entry: 0 
00:36:13.090 Error Count: 0x3 00:36:13.090 Submission Queue Id: 0x0 00:36:13.090 Command Id: 0x5 00:36:13.090 Phase Bit: 0 00:36:13.090 Status Code: 0x2 00:36:13.090 Status Code Type: 0x0 00:36:13.090 Do Not Retry: 1 00:36:13.090 Error Location: 0x28 00:36:13.090 LBA: 0x0 00:36:13.090 Namespace: 0x0 00:36:13.090 Vendor Log Page: 0x0 00:36:13.090 ----------- 00:36:13.090 Entry: 1 00:36:13.090 Error Count: 0x2 00:36:13.090 Submission Queue Id: 0x0 00:36:13.090 Command Id: 0x5 00:36:13.090 Phase Bit: 0 00:36:13.090 Status Code: 0x2 00:36:13.090 Status Code Type: 0x0 00:36:13.090 Do Not Retry: 1 00:36:13.090 Error Location: 0x28 00:36:13.090 LBA: 0x0 00:36:13.090 Namespace: 0x0 00:36:13.090 Vendor Log Page: 0x0 00:36:13.090 ----------- 00:36:13.090 Entry: 2 00:36:13.090 Error Count: 0x1 00:36:13.090 Submission Queue Id: 0x0 00:36:13.090 Command Id: 0x4 00:36:13.090 Phase Bit: 0 00:36:13.090 Status Code: 0x2 00:36:13.090 Status Code Type: 0x0 00:36:13.090 Do Not Retry: 1 00:36:13.090 Error Location: 0x28 00:36:13.090 LBA: 0x0 00:36:13.090 Namespace: 0x0 00:36:13.090 Vendor Log Page: 0x0 00:36:13.090 00:36:13.090 Number of Queues 00:36:13.090 ================ 00:36:13.090 Number of I/O Submission Queues: 128 00:36:13.090 Number of I/O Completion Queues: 128 00:36:13.090 00:36:13.090 ZNS Specific Controller Data 00:36:13.090 ============================ 00:36:13.090 Zone Append Size Limit: 0 00:36:13.090 00:36:13.090 00:36:13.090 Active Namespaces 00:36:13.090 ================= 00:36:13.090 get_feature(0x05) failed 00:36:13.090 Namespace ID:1 00:36:13.090 Command Set Identifier: NVM (00h) 00:36:13.090 Deallocate: Supported 00:36:13.090 Deallocated/Unwritten Error: Not Supported 00:36:13.090 Deallocated Read Value: Unknown 00:36:13.090 Deallocate in Write Zeroes: Not Supported 00:36:13.090 Deallocated Guard Field: 0xFFFF 00:36:13.090 Flush: Supported 00:36:13.090 Reservation: Not Supported 00:36:13.090 Namespace Sharing Capabilities: Multiple Controllers 00:36:13.090 Size (in LBAs): 3125627568 (1490GiB) 00:36:13.090 Capacity (in LBAs): 3125627568 (1490GiB) 00:36:13.090 Utilization (in LBAs): 3125627568 (1490GiB) 00:36:13.090 UUID: c719e61b-3a8c-4ae8-a37a-c3c17d7f3743 00:36:13.090 Thin Provisioning: Not Supported 00:36:13.090 Per-NS Atomic Units: Yes 00:36:13.090 Atomic Boundary Size (Normal): 0 00:36:13.090 Atomic Boundary Size (PFail): 0 00:36:13.090 Atomic Boundary Offset: 0 00:36:13.090 NGUID/EUI64 Never Reused: No 00:36:13.090 ANA group ID: 1 00:36:13.090 Namespace Write Protected: No 00:36:13.090 Number of LBA Formats: 1 00:36:13.090 Current LBA Format: LBA Format #00 00:36:13.090 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:13.090 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:13.090 rmmod nvme_tcp 00:36:13.090 rmmod nvme_fabrics 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:13.090 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.349 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:13.349 14:04:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.251 14:04:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:15.251 14:04:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:15.251 14:04:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:15.251 14:04:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 0 00:36:15.251 14:04:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:15.251 14:04:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:15.252 14:04:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:15.252 14:04:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:15.252 14:04:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # modules=(/sys/module/nvmet/holders/*) 00:36:15.252 14:04:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # modprobe -r nvmet_tcp nvmet 00:36:15.252 14:04:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # modprobe -r null_blk 00:36:15.252 14:04:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:19.445 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:80:04.3 (8086 
2021): ioatdma -> vfio-pci 00:36:19.445 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:19.445 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:20.825 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:36:20.825 00:36:20.825 real 0m21.079s 00:36:20.825 user 0m4.718s 00:36:20.825 sys 0m11.785s 00:36:20.825 14:04:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:20.825 14:04:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:20.825 ************************************ 00:36:20.825 END TEST nvmf_identify_kernel_target 00:36:20.825 ************************************ 00:36:20.825 14:04:35 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:20.825 14:04:35 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:36:20.825 14:04:35 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:20.825 14:04:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:20.825 ************************************ 00:36:20.825 START TEST nvmf_auth_host 00:36:20.825 ************************************ 00:36:20.825 14:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:21.085 * Looking for test storage... 00:36:21.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:36:21.085 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 
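The host identity set up when common.sh was sourced above (NVME_HOSTNQN from nvme gen-hostnqn, the matching NVME_HOSTID, and the NVME_HOST argument array built from them) is what nvme-cli invocations in these tests expand onto their command lines. A minimal sketch of that expansion, matching the discover call issued against the kernel target earlier in this log:

# "${NVME_HOST[@]}" expands to
#   --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562
nvme discover "${NVME_HOST[@]}" -a 10.0.0.1 -t tcp -s 4420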
00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:36:21.086 14:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@298 -- # mlx=() 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:31.184 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:31.184 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:31.184 
14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:31.184 Found net devices under 0000:af:00.0: cvl_0_0 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:31.184 Found net devices under 0000:af:00.1: cvl_0_1 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:31.184 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:31.185 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:31.185 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:31.185 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:31.185 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:31.185 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:31.185 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:31.185 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:31.185 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:31.185 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:31.185 14:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:31.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:31.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:36:31.185 00:36:31.185 --- 10.0.0.2 ping statistics --- 00:36:31.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.185 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:31.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:31.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:36:31.185 00:36:31.185 --- 10.0.0.1 ping statistics --- 00:36:31.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.185 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1615018 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 
0 -e 0xFFFF -L nvme_auth 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1615018 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1615018 ']' 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:31.185 14:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=null 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=32 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=0f68f539fec5b90ea62cf9facd81f147 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-null.XXX 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-null.ZJI 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 0f68f539fec5b90ea62cf9facd81f147 0 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 0f68f539fec5b90ea62cf9facd81f147 0 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=0f68f539fec5b90ea62cf9facd81f147 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=0 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-null.ZJI 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-null.ZJI 
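The python step inside format_key above builds the DH-HMAC-CHAP secret representation: the 32-character hex string produced by xxd is used verbatim as the secret bytes, a CRC-32 of those bytes is appended, and the result is base64-encoded and wrapped as DHHC-1:<hash id>:...:. The script body itself is not echoed by xtrace, so the sketch below is an assumption based on the NVMe-oF secret format (in particular the little-endian CRC trailer and the 00 hash identifier for an unhashed secret), not a copy of common.sh:

# Sketch only: reproduce the DHHC-1 wrapping for the key generated above.
key=0f68f539fec5b90ea62cf9facd81f147    # hex string from the xxd call in the trace
python - <<EOF
import base64, zlib
secret = b"$key"                                # the ASCII hex string itself serves as the secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumed little-endian CRC-32 trailer
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
EOF
# -> DHHC-1:00:<base64 of secret+CRC>:  roughly the content chmod 0600'd into /tmp/spdk.key-null.ZJI above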
00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ZJI 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=sha512 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=64 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=274e63da3b3b7d2818408d4ae2ee8639bff220f2a3557819031a5c14b2cf2a7e 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha512.XXX 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha512.JIc 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 274e63da3b3b7d2818408d4ae2ee8639bff220f2a3557819031a5c14b2cf2a7e 3 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 274e63da3b3b7d2818408d4ae2ee8639bff220f2a3557819031a5c14b2cf2a7e 3 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=274e63da3b3b7d2818408d4ae2ee8639bff220f2a3557819031a5c14b2cf2a7e 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=3 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha512.JIc 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha512.JIc 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.JIc 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=null 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=48 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=44ea95d607cc59e8437b3d555fe9099442e6ba95728dad16 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-null.XXX 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-null.SaJ 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 44ea95d607cc59e8437b3d555fe9099442e6ba95728dad16 0 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 
44ea95d607cc59e8437b3d555fe9099442e6ba95728dad16 0 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=44ea95d607cc59e8437b3d555fe9099442e6ba95728dad16 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=0 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-null.SaJ 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-null.SaJ 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.SaJ 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=sha384 00:36:31.185 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=48 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=63b3031235220c10155e633f8059c2b1cdfd5d348abe348c 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha384.XXX 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha384.k96 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 63b3031235220c10155e633f8059c2b1cdfd5d348abe348c 2 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 63b3031235220c10155e633f8059c2b1cdfd5d348abe348c 2 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=63b3031235220c10155e633f8059c2b1cdfd5d348abe348c 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=2 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha384.k96 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha384.k96 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.k96 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=sha256 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=32 00:36:31.186 14:04:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=4df5c61fa5015a2b4a4635147c1b2c90 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha256.XXX 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha256.Ec5 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 4df5c61fa5015a2b4a4635147c1b2c90 1 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 4df5c61fa5015a2b4a4635147c1b2c90 1 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=4df5c61fa5015a2b4a4635147c1b2c90 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=1 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha256.Ec5 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha256.Ec5 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Ec5 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=sha256 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=32 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=a8591477807b2dc80d09cafeba87829b 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha256.XXX 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha256.b4C 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key a8591477807b2dc80d09cafeba87829b 1 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 a8591477807b2dc80d09cafeba87829b 1 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=a8591477807b2dc80d09cafeba87829b 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=1 00:36:31.186 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha256.b4C 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha256.b4C 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.b4C 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 
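Note: each gen_dhchap_key call traced above draws len/2 random bytes from /dev/urandom as a hex string and stashes the secret in a mode-0600 temp file. A minimal stand-alone sketch of that flow, under the assumption that the helper and file names below are illustrative rather than the exact ones in nvmf/common.sh (the real helper additionally wraps the hex in the DHHC-1 representation, see the next note):

    #!/usr/bin/env bash
    # Sketch: generate a DH-HMAC-CHAP secret of <len> hex characters,
    # mirroring the xxd/mktemp/chmod steps traced in this log.
    gen_hex_secret() {
        local digest=$1 len=$2 file key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars = len/2 random bytes
        file=$(mktemp -t "spdk.key-${digest}.XXX")
        echo "$key" > "$file"
        chmod 0600 "$file"                               # secrets must not be world-readable
        echo "$file"
    }

    keyfile=$(gen_hex_secret sha256 32)
    echo "secret stored in $keyfile"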
00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=sha384 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=48 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=9959021ae11abdd2e37eb558fe606b52ee79760b2ff9ea88 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha384.XXX 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha384.rZ5 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 9959021ae11abdd2e37eb558fe606b52ee79760b2ff9ea88 2 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 9959021ae11abdd2e37eb558fe606b52ee79760b2ff9ea88 2 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=9959021ae11abdd2e37eb558fe606b52ee79760b2ff9ea88 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=2 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha384.rZ5 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha384.rZ5 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.rZ5 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=null 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=32 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=c0c4479955a8ea46fb7f23d9c37d8baf 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-null.XXX 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-null.5iV 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key c0c4479955a8ea46fb7f23d9c37d8baf 0 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 c0c4479955a8ea46fb7f23d9c37d8baf 0 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=c0c4479955a8ea46fb7f23d9c37d8baf 
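Note: the format_dhchap_key / "python -" steps above wrap each hex secret as DHHC-1:<digest>:<base64 payload>:, and comparing the raw hex keys with the formatted strings later in this log shows the payload decodes to the ASCII hex secret followed by four trailing check bytes. A small sketch that splits such a secret back apart; inspect_dhchap_key is a made-up helper and the 4-byte check-value layout is inferred from this log, not asserted from the spec:

    # Sketch: pick apart a DHHC-1 secret as printed later in this log.
    inspect_dhchap_key() {
        local formatted=$1 prefix digest payload rest
        IFS=: read -r prefix digest payload rest <<< "$formatted"
        echo "prefix: $prefix"    # always DHHC-1
        echo "digest: $digest"    # 00=null, 01=sha256, 02=sha384, 03=sha512
        # Dropping the last 4 (check) bytes of the decoded payload leaves the hex secret.
        echo "secret: $(printf '%s' "$payload" | base64 -d | head -c -4)"
    }

    inspect_dhchap_key 'DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==:'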
00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=0 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:36:31.445 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-null.5iV 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-null.5iV 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.5iV 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=sha512 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=64 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=e57c04ed1fb4804bccdfb324905f2d71d9cbe2961bebf6c186e4adfe772b33ce 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha512.XXX 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha512.LVp 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key e57c04ed1fb4804bccdfb324905f2d71d9cbe2961bebf6c186e4adfe772b33ce 3 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 e57c04ed1fb4804bccdfb324905f2d71d9cbe2961bebf6c186e4adfe772b33ce 3 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=e57c04ed1fb4804bccdfb324905f2d71d9cbe2961bebf6c186e4adfe772b33ce 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=3 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha512.LVp 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha512.LVp 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.LVp 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1615018 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1615018 ']' 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
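Note: waitforlisten above simply blocks until the target process answers RPCs on /var/tmp/spdk.sock. A rough stand-in that polls with scripts/rpc.py; the retry count and sleep interval here are arbitrary choices, not the values hard-coded in autotest_common.sh:

    # Sketch: wait until the SPDK app answers RPCs on its UNIX socket.
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done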
00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:31.446 14:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ZJI 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.JIc ]] 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JIc 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.SaJ 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.k96 ]] 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k96 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Ec5 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.705 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.964 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.964 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.b4C ]] 00:36:31.964 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.b4C 00:36:31.964 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.964 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.964 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
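Note: the keyring_file_add_key calls above, and the remaining ones just below, hand each generated key file to the target under the names key0..key4 (and ckey0..ckey3 for the controller keys). The same registration as a plain rpc.py loop, assuming keys[] and ckeys[] hold the /tmp/spdk.key-* paths produced earlier:

    # Sketch: register host and controller DH-HMAC-CHAP key files with the target.
    for i in "${!keys[@]}"; do
        scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
        if [[ -n ${ckeys[i]:-} ]]; then
            scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
        fi
    done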
00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.rZ5 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.5iV ]] 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.5iV 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.LVp 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 nvmf_port=4420 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local 
block nvme 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:31.965 14:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:36.157 Waiting for block devices as requested 00:36:36.157 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:36.157 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:36.157 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:36.157 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:36.157 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:36.157 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:36.157 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:36.157 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:36.416 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:36.416 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:36.416 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:36.675 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:36.675 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:36.675 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:36.934 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:36.934 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:36.934 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:37.897 No valid GPT data, bailing 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@657 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@663 -- # echo SPDK-test 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo 1 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ -b 
/dev/nvme0n1 ]] 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo /dev/nvme0n1 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo 1 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@676 -- # echo 10.0.0.1 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # echo tcp 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@678 -- # echo 4420 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@679 -- # echo ipv4 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@682 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:37.897 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@685 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:37.897 00:36:37.897 Discovery Log Number of Records 2, Generation counter 2 00:36:37.898 =====Discovery Log Entry 0====== 00:36:37.898 trtype: tcp 00:36:37.898 adrfam: ipv4 00:36:37.898 subtype: current discovery subsystem 00:36:37.898 treq: not specified, sq flow control disable supported 00:36:37.898 portid: 1 00:36:37.898 trsvcid: 4420 00:36:37.898 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:37.898 traddr: 10.0.0.1 00:36:37.898 eflags: none 00:36:37.898 sectype: none 00:36:37.898 =====Discovery Log Entry 1====== 00:36:37.898 trtype: tcp 00:36:37.898 adrfam: ipv4 00:36:37.898 subtype: nvme subsystem 00:36:37.898 treq: not specified, sq flow control disable supported 00:36:37.898 portid: 1 00:36:37.898 trsvcid: 4420 00:36:37.898 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:37.898 traddr: 10.0.0.1 00:36:37.898 eflags: none 00:36:37.898 sectype: none 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 
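Note: nvmet_auth_set_key above pushes the chosen hash, DH group and DHHC-1 secrets into the kernel target's host entry under configfs. A sketch of roughly what that amounts to; the dhchap_* attribute names are an assumption based on the kernel nvmet auth interface and the values echoed in this log (the xtrace does not show the redirection targets), so check host/auth.sh for the exact paths:

    # Sketch: arm the kernel nvmet host entry for DH-HMAC-CHAP.
    # Attribute names are assumptions; values mirror the echo calls traced above.
    hostnqn=nqn.2024-02.io.spdk:host0
    host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # negotiated digest
    echo ffdhe2048      > "$host_dir/dhchap_dhgroup"   # DH group for the exchange
    echo 'DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==:' \
        > "$host_dir/dhchap_key"                       # host secret (key1 above)
    echo 'DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==:' \
        > "$host_dir/dhchap_ctrl_key"                  # controller secret (ckey1)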
00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:37.898 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.206 nvme0n1 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.206 14:04:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.206 
14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:38.206 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:38.207 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:38.207 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.207 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.465 nvme0n1 00:36:38.465 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.465 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.465 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.465 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.465 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.465 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:38.466 14:04:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.466 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.725 nvme0n1 00:36:38.725 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.725 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.725 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.725 14:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.725 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
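Note: each connect_authenticate pass above restricts the initiator to one digest/DH-group combination and then attaches with the matching keyring entries before verifying and detaching the controller. The same sequence as a standalone set of rpc.py calls, using the address, NQNs and key names that appear throughout this log:

    # Sketch: authenticate one digest/dhgroup/keyid combination, then tear it down.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    scripts/rpc.py bdev_nvme_get_controllers       # expect one controller named nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0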
00:36:38.725 14:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.725 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.985 nvme0n1 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:38.985 14:04:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.985 nvme0n1 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.985 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.244 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.244 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.244 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.244 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.245 nvme0n1 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.245 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.504 nvme0n1 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.504 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.505 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.505 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.505 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.505 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.505 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.764 14:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.764 nvme0n1 00:36:39.764 
14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.764 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.764 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.764 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.764 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.764 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.764 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.764 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.764 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.764 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.024 nvme0n1 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.024 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
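The cycle the trace keeps repeating for each key is compact enough to sketch. Below is a minimal reconstruction of the connect/verify/detach steps visible above (host/auth.sh@60-65), not the script itself: it assumes rpc_cmd is the autotest wrapper around scripts/rpc.py and that the DHHC-1 secrets were registered with the host earlier in the run under the names key<id>/ckey<id>.

connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Restrict the initiator to the digest/DH-group pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach to the target at 10.0.0.1:4420; authentication uses the host key
    # (and the controller key, when one exists for this keyid).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Success is judged by the controller showing up; it is then detached for the next key.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}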
00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.284 nvme0n1 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.284 
14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.284 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.543 14:04:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.543 nvme0n1 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.543 14:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.543 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:40.803 14:04:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.803 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.062 nvme0n1 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:41.062 14:04:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.062 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.321 nvme0n1 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.321 14:04:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.321 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.580 nvme0n1 00:36:41.580 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.580 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.580 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.580 14:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.580 14:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.580 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.580 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.580 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.580 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.580 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
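Before every attach the trace also walks through get_main_ns_ip (nvmf/common.sh@747-761), which resolves the address to dial. The following is a reconstruction from the trace alone; variable names not visible in the trace (such as TEST_TRANSPORT) and the untraced lines between @756 and @761 are assumptions.

get_main_ns_ip_sketch() {
    local ip
    local -A ip_candidates

    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Bail out if the transport under test is unknown; the transport is tcp in this run.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # ip holds the *name* of the address variable; indirect expansion yields 10.0.0.1 here.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}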
00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.840 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.099 nvme0n1 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.099 14:04:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:42.099 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:42.100 14:04:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # local -A ip_candidates 00:36:42.100 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.100 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.100 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:42.100 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.100 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:42.100 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:42.100 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:42.100 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:42.100 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.100 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.359 nvme0n1 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:42.359 14:04:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.359 14:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.927 nvme0n1 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.927 
14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.927 14:04:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.927 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.495 nvme0n1 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:43.495 14:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.754 nvme0n1 00:36:43.754 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:43.754 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.754 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.754 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:43.754 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.013 
14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.013 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.272 nvme0n1 00:36:44.272 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.272 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.272 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.272 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.272 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.272 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.531 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.531 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.531 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.531 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.531 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.531 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.531 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:44.531 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.531 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:44.531 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:44.531 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.532 14:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.790 nvme0n1 00:36:44.790 14:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:44.790 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.790 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.790 14:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:44.790 14:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.791 14:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:45.049 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.049 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.049 14:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:45.049 14:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.049 14:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:45.049 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:45.049 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:45.050 14:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.618 nvme0n1 00:36:45.618 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:45.618 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.618 14:05:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.618 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:45.618 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.618 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:45.618 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.618 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.618 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:45.618 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # local ip 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:45.877 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.445 nvme0n1 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:46.445 14:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.382 nvme0n1 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.382 
14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:47.382 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 
00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:47.383 14:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.951 nvme0n1 00:36:47.951 14:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:47.951 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.951 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.951 14:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:47.951 14:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.951 14:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:48.208 
14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:48.208 14:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.774 nvme0n1 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:48.774 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:49.033 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.034 nvme0n1 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.034 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.293 nvme0n1 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.293 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.294 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.553 nvme0n1 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # local ip 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.553 14:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.812 nvme0n1 00:36:49.812 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.812 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:49.813 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.072 nvme0n1 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 
00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.072 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.331 nvme0n1 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
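The trace above repeats one pattern per digest/dhgroup/keyid combination: constrain the host side with bdev_nvme_set_options, attach with bdev_nvme_attach_controller using the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key is defined, as for keys 0-3), check that nvme0 shows up via bdev_nvme_get_controllers, then detach before the next iteration. A minimal hand-run sketch of that same sequence using SPDK's rpc.py follows; the ./scripts/rpc.py path and the pre-registered key names key0/ckey0 are assumptions, and registration of the DHHC-1 secrets themselves is omitted.

# hedged sketch, not the upstream host/auth.sh test script
RPC=./scripts/rpc.py    # assumed location of SPDK's RPC client

# allow only one digest/DH-group pair on the host, mirroring one loop iteration above
$RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# authenticate and attach; drop --dhchap-ctrlr-key for unidirectional auth (key 4 has no ckey)
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# confirm the controller exists, then tear it down before the next combination
$RPC bdev_nvme_get_controllers | jq -r '.[].name'
$RPC bdev_nvme_detach_controller nvme0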
00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:50.331 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.332 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.591 nvme0n1 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.591 14:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.851 nvme0n1 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.851 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.110 nvme0n1 00:36:51.110 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.110 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.110 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.110 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.110 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.111 nvme0n1 00:36:51.111 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.370 14:05:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip_candidates=() 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.370 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.629 nvme0n1 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.629 14:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.887 nvme0n1 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.887 14:05:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.887 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:51.888 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.147 nvme0n1 00:36:52.147 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.147 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.147 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.147 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.147 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.147 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:52.406 14:05:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.406 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.665 nvme0n1 00:36:52.665 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.665 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.665 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.665 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.665 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.665 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.665 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:36:52.666 14:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.925 nvme0n1 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.925 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.493 nvme0n1 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:53.493 14:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.062 nvme0n1 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.062 14:05:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # 
local ip 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.062 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.321 nvme0n1 00:36:54.321 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.321 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.321 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.321 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.321 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.321 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.580 14:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.839 nvme0n1 00:36:54.839 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:54.839 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.839 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.839 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:54.839 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.839 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 
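
The nvmet_auth_set_key steps traced above (host/auth.sh@42-@51) provision the target side of DH-HMAC-CHAP for one digest/dhgroup/key combination. A minimal sketch of that helper is shown below; the configfs paths are assumptions, since the trace only shows the echoed values (hmac(sha384), ffdhe6144, the DHHC-1 keys) and not the files they are redirected into, and the host NQN is taken from the -q argument used by the attach calls.

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local hostnqn=nqn.2024-02.io.spdk:host0            # matches -q in the attach_controller calls above
        local cfg=/sys/kernel/config/nvmet/hosts/$hostnqn  # assumed configfs location for per-host auth attributes

        echo "hmac($digest)" > "$cfg/dhchap_hash"          # e.g. 'hmac(sha384)', echoed at host/auth.sh@48
        echo "$dhgroup"      > "$cfg/dhchap_dhgroup"       # e.g. ffdhe6144, echoed at host/auth.sh@49
        echo "$key"          > "$cfg/dhchap_key"           # DHHC-1:0x:... host key, echoed at host/auth.sh@50
        [[ -n $ckey ]] && echo "$ckey" > "$cfg/dhchap_ctrl_key"  # bidirectional key only when a ckey exists (host/auth.sh@51)
    }
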
00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:55.098 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.357 nvme0n1 00:36:55.357 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:55.357 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.357 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:55.357 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.357 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.357 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:55.357 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.357 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.357 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:55.357 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.616 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:55.616 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:55.616 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
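
On the initiator side, the connect_authenticate steps traced here reduce to two RPCs per iteration: restrict the host to one digest/dhgroup pair, then attach with the matching DH-HMAC-CHAP key. A condensed sketch of the loop driving this section is shown below; the array names (digests, dhgroups, keys, ckeys) mirror the variables referenced at host/auth.sh@100-@103, the digest/dhgroup values visible in this part of the run are sha384/sha512 and ffdhe2048/ffdhe6144/ffdhe8192, and every command is one that appears verbatim in the trace.

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                 # key0..key4, as seen above
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

          # connect_authenticate: limit the allowed digest/dhgroup, then attach with key<N>
          # (and the controller key only when ckey<N> is non-empty, per host/auth.sh@58)
          rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
              -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
              --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

          # the attach only yields a controller if authentication succeeded; verify, then tear down
          [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
          rpc_cmd bdev_nvme_detach_controller nvme0
        done
      done
    done
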
00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:55.617 14:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.185 nvme0n1 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:56.185 14:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.123 nvme0n1 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:57.123 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.124 14:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.689 nvme0n1 00:36:57.689 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.690 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.690 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:57.690 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.690 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.690 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.948 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.515 nvme0n1 00:36:58.515 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:58.515 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:36:58.515 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:58.515 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:58.515 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.515 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:58.515 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:58.515 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:58.515 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:58.515 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.774 14:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:58.774 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:58.774 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:58.774 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.774 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:58.774 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:58.774 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:58.774 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:58.774 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:58.774 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:58.774 14:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:58.774 14:05:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:58.774 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.417 nvme0n1 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:59.417 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.418 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.677 nvme0n1 00:36:59.677 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.677 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:59.677 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:59.677 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.677 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.677 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.677 14:05:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:59.677 14:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:59.677 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.677 14:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.677 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.936 nvme0n1 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.936 nvme0n1 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:59.936 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.195 14:05:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:00.195 14:05:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.195 nvme0n1 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.195 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.454 nvme0n1 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:00.454 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.455 14:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.715 nvme0n1 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.715 
14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.715 14:05:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.715 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.974 nvme0n1 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
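[editor note] Every connect_authenticate pass in this trace drives the same host-side RPC sequence: restrict the allowed digest/DH group, attach with the DH-HMAC-CHAP key pair, confirm the controller came up, then detach. A minimal standalone sketch of that sequence, assuming rpc_cmd wraps SPDK's scripts/rpc.py (SPDK_ROOT is a placeholder) and that key2/ckey2 were already provisioned during test setup:

#!/usr/bin/env bash
# Sketch of one connect_authenticate pass; digest/dhgroup/keyid values taken from the log above.
# Assumption: rpc_cmd forwards to SPDK's scripts/rpc.py against the running target.
rpc_cmd() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }

digest=sha512 dhgroup=ffdhe2048 keyid=2

# Restrict the host to the digest/DH-group pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect with bidirectional authentication (host key + controller key).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The attach only succeeds if authentication passed; verify the controller exists, then tear down.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0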
00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.974 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.233 nvme0n1 00:37:01.233 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.233 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.234 14:05:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
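[editor note] The get_main_ns_ip block repeated before every attach (nvmf/common.sh@747-761 above) only decides which environment IP the initiator should dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, resolving to 10.0.0.1 throughout this run. A rough reconstruction of that helper from the trace; the transport variable name and the early-return guards are assumptions, only the tcp path exercised here is shown:

# Reconstructed sketch of get_main_ns_ip based on the nvmf/common.sh records in this log.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                   # TEST_TRANSPORT=tcp assumed for this run
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                    # -> NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                             # indirect expansion: 10.0.0.1 here
    echo "${!ip}"
}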
00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.234 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.493 nvme0n1 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:01.493 
14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.493 14:05:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.752 nvme0n1 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:01.752 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.011 nvme0n1 00:37:02.011 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.011 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:02.011 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:02.011 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.011 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.011 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:02.269 14:05:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.269 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.527 nvme0n1 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
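[editor note] The auth.sh@58 expansion visible in each pass, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), is what makes the keyid=4 iterations attach without --dhchap-ctrlr-key: their ckeys entry is empty, so that pass exercises host-only (unidirectional) authentication. A small self-contained illustration of the pattern; the key material below is a placeholder, not the test's generated DHHC-1 secrets:

#!/usr/bin/env bash
# Illustration of the ${ckeys[keyid]:+...} trick from host/auth.sh@58.
keyid=4
declare -a ckeys=([0]="DHHC-1:placeholder0" [1]="DHHC-1:placeholder1" \
                  [2]="DHHC-1:placeholder2" [3]="DHHC-1:placeholder3" [4]="")

# Expands to two words when a controller key exists, and to nothing when it is empty,
# so keyid=4 passes no --dhchap-ctrlr-key to bdev_nvme_attach_controller.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

echo "extra attach args: ${ckey[*]:-(none)}"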
00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:02.527 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.528 14:05:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.786 nvme0n1 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # 
local ip 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.787 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.045 nvme0n1 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:03.045 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:03.304 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.564 nvme0n1 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 
00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:03.564 14:05:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.132 nvme0n1 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
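Each connect_authenticate pass that follows is the host-side counterpart: bdev_nvme_set_options restricts the allowed DH-HMAC-CHAP digests and DH groups, bdev_nvme_attach_controller connects to 10.0.0.1:4420 with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), and the resulting controller is verified and detached before the next iteration. Condensed from the commands visible in this trace, one iteration looks roughly like the sketch below (rpc_cmd is the autotest wrapper around scripts/rpc.py; key0/ckey0 are the key names prepared earlier in the test, not defined here).

# One host-side authentication pass, condensed from the trace above.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
# The attach only succeeds if DH-HMAC-CHAP completes; verify, then clean up.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0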
00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.132 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:04.133 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.391 nvme0n1 00:37:04.391 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:04.391 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.391 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:04.391 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.391 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.391 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:04.391 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.391 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.391 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:04.650 14:05:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.909 nvme0n1 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:04.909 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.168 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.427 nvme0n1 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:05.427 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:05.428 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:37:05.428 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:05.428 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:37:05.428 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.428 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:05.428 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:05.428 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:05.428 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.428 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:05.428 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.428 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.428 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.687 14:05:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.956 nvme0n1 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:05.956 14:05:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGY2OGY1MzlmZWM1YjkwZWE2MmNmOWZhY2Q4MWYxNDd+5Eg5: 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: ]] 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc0ZTYzZGEzYjNiN2QyODE4NDA4ZDRhZTJlZTg2MzliZmYyMjBmMmEzNTU3ODE5MDMxYTVjMTRiMmNmMmE3ZT+bBfM=: 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip_candidates=() 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:05.956 14:05:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:05.957 14:05:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:05.957 14:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.957 14:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.894 nvme0n1 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:06.894 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:06.895 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.830 nvme0n1 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:07.830 14:05:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:07.830 14:05:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRmNWM2MWZhNTAxNWEyYjRhNDYzNTE0N2MxYjJjOTDnht32: 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: ]] 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTg1OTE0Nzc4MDdiMmRjODBkMDljYWZlYmE4NzgyOWLT5BqS: 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:07.830 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.396 nvme0n1 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:08.396 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk1OTAyMWFlMTFhYmRkMmUzN2ViNTU4ZmU2MDZiNTJlZTc5NzYwYjJmZjllYTg4AxZBrQ==: 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: ]] 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBjNDQ3OTk1NWE4ZWE0NmZiN2YyM2Q5YzM3ZDhiYWbBg2nF: 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:37:08.397 14:05:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:08.397 14:05:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.331 nvme0n1 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU3YzA0ZWQxZmI0ODA0YmNjZGZiMzI0OTA1ZjJkNzFkOWNiZTI5NjFiZWJmNmMxODZlNGFkZmU3NzJiMzNjZVg4j2o=: 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:37:09.331 14:05:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.899 nvme0n1 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDRlYTk1ZDYwN2NjNTllODQzN2IzZDU1NWZlOTA5OTQ0MmU2YmE5NTcyOGRhZDE2RIb0lw==: 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: ]] 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNiMzAzMTIzNTIyMGMxMDE1NWU2MzNmODA1OWMyYjFjZGZkNWQzNDhhYmUzNDhjOITnXw==: 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:09.899 
14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:09.899 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.157 request: 00:37:10.157 { 00:37:10.157 "name": "nvme0", 00:37:10.157 "trtype": "tcp", 00:37:10.157 "traddr": "10.0.0.1", 00:37:10.157 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:10.157 "adrfam": "ipv4", 00:37:10.157 "trsvcid": "4420", 00:37:10.157 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:10.157 "method": "bdev_nvme_attach_controller", 00:37:10.157 "req_id": 1 00:37:10.157 } 00:37:10.157 Got JSON-RPC error response 00:37:10.157 response: 00:37:10.157 { 00:37:10.157 "code": -5, 00:37:10.157 "message": "Input/output error" 00:37:10.157 } 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:37:10.157 
14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:10.157 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.157 request: 00:37:10.157 { 00:37:10.157 "name": "nvme0", 00:37:10.157 "trtype": "tcp", 00:37:10.157 "traddr": "10.0.0.1", 00:37:10.157 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:10.157 "adrfam": "ipv4", 00:37:10.157 "trsvcid": "4420", 00:37:10.158 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:10.158 "dhchap_key": "key2", 00:37:10.158 "method": "bdev_nvme_attach_controller", 00:37:10.158 "req_id": 1 00:37:10.158 } 00:37:10.158 Got JSON-RPC error response 00:37:10.158 response: 00:37:10.158 { 00:37:10.158 "code": -5, 00:37:10.158 "message": "Input/output error" 00:37:10.158 } 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:37:10.158 
14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:10.158 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.416 request: 00:37:10.416 { 00:37:10.416 "name": "nvme0", 00:37:10.416 "trtype": "tcp", 00:37:10.416 "traddr": "10.0.0.1", 00:37:10.416 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:10.416 "adrfam": "ipv4", 00:37:10.416 "trsvcid": "4420", 00:37:10.416 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:10.416 "dhchap_key": "key1", 00:37:10.416 "dhchap_ctrlr_key": "ckey2", 00:37:10.416 "method": "bdev_nvme_attach_controller", 00:37:10.416 "req_id": 1 
00:37:10.416 } 00:37:10.416 Got JSON-RPC error response 00:37:10.416 response: 00:37:10.416 { 00:37:10.416 "code": -5, 00:37:10.416 "message": "Input/output error" 00:37:10.416 } 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:10.416 rmmod nvme_tcp 00:37:10.416 rmmod nvme_fabrics 00:37:10.416 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1615018 ']' 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1615018 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 1615018 ']' 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 1615018 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1615018 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1615018' 00:37:10.417 killing process with pid 1615018 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 1615018 00:37:10.417 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 1615018 00:37:10.675 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:10.675 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:10.675 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:10.675 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:10.675 14:05:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:10.675 14:05:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.675 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:10.675 14:05:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 0 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@696 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # modules=(/sys/module/nvmet/holders/*) 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@700 -- # modprobe -r nvmet_tcp nvmet 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@701 -- # modprobe -r null_blk 00:37:13.210 14:05:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:17.401 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:17.401 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:18.777 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:37:18.777 14:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ZJI /tmp/spdk.key-null.SaJ /tmp/spdk.key-sha256.Ec5 /tmp/spdk.key-sha384.rZ5 /tmp/spdk.key-sha512.LVp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:37:18.777 14:05:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:22.969 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:00:04.6 (8086 
2021): Already using the vfio-pci driver 00:37:22.969 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:37:22.969 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:22.969 00:37:22.969 real 1m1.633s 00:37:22.969 user 0m51.806s 00:37:22.969 sys 0m18.381s 00:37:22.969 14:05:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:22.969 14:05:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.969 ************************************ 00:37:22.969 END TEST nvmf_auth_host 00:37:22.969 ************************************ 00:37:22.969 14:05:36 nvmf_tcp -- nvmf/nvmf.sh@108 -- # [[ tcp == \t\c\p ]] 00:37:22.969 14:05:36 nvmf_tcp -- nvmf/nvmf.sh@109 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:22.969 14:05:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:37:22.969 14:05:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:22.969 14:05:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:22.969 ************************************ 00:37:22.969 START TEST nvmf_digest 00:37:22.969 ************************************ 00:37:22.969 14:05:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:22.969 * Looking for test storage... 
00:37:22.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.969 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:22.970 14:05:37 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:37:22.970 14:05:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:31.089 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:31.090 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:31.090 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:31.090 Found net devices under 0000:af:00.0: cvl_0_0 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:31.090 Found net devices under 0000:af:00.1: cvl_0_1 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:31.090 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:31.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:31.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:37:31.349 00:37:31.349 --- 10.0.0.2 ping statistics --- 00:37:31.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.349 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:31.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:31.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:37:31.349 00:37:31.349 --- 10.0.0.1 ping statistics --- 00:37:31.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.349 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.349 ************************************ 00:37:31.349 START TEST nvmf_digest_clean 00:37:31.349 ************************************ 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1630962 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1630962 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1630962 ']' 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:31.349 
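For readers reproducing this setup by hand: the nvmf_tcp_init trace above (14:05:45) reduces to roughly the shell sequence below. This is a sketch of what nvmf/common.sh does in this particular run, not an excerpt from it; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are simply what this host's E810 ports came up as, so substitute your own.

  # move one E810 port into a private namespace to act as the target side
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # the initiator side keeps cvl_0_1 in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in, then confirm both directions work
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two pings at the end are the same connectivity gate the log shows passing just above.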
14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:31.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:31.349 14:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:31.349 [2024-06-10 14:05:45.701162] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:37:31.349 [2024-06-10 14:05:45.701235] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:31.349 EAL: No free 2048 kB hugepages reported on node 1 00:37:31.680 [2024-06-10 14:05:45.829314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:31.680 [2024-06-10 14:05:45.913564] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:31.681 [2024-06-10 14:05:45.913614] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:31.681 [2024-06-10 14:05:45.913628] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:31.681 [2024-06-10 14:05:45.913640] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:31.681 [2024-06-10 14:05:45.913650] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:31.681 [2024-06-10 14:05:45.913675] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:37:32.256 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:32.256 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:37:32.256 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:32.256 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:32.256 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:32.256 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:32.256 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:32.256 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:32.256 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:32.256 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:32.256 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:32.256 null0 00:37:32.256 [2024-06-10 14:05:46.698968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:32.256 [2024-06-10 14:05:46.723190] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1631249 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1631249 /var/tmp/bperf.sock 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1631249 ']' 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:32.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:32.515 14:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:32.515 [2024-06-10 14:05:46.777549] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:37:32.515 [2024-06-10 14:05:46.777622] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631249 ] 00:37:32.515 EAL: No free 2048 kB hugepages reported on node 1 00:37:32.515 [2024-06-10 14:05:46.888560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.515 [2024-06-10 14:05:46.978718] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.451 14:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:33.451 14:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:37:33.451 14:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:33.451 14:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:33.451 14:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:33.710 14:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:33.710 14:05:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:33.969 nvme0n1 00:37:33.969 14:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:33.969 14:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:34.228 Running I/O for 2 seconds... 
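Each run_bperf invocation in this suite drives the same initiator-side sequence, which the xtrace above shows for the first case (randread, 4 KiB, queue depth 128). Condensed into plain shell, with $spdk standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout and the backgrounding added for readability, it is roughly:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # launch bdevperf idle (-z --wait-for-rpc) on its own RPC socket
  $spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # complete framework init, then attach the target with TCP data digest enabled
  $spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the timed workload against the resulting nvme0n1 bdev
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The --ddgst flag is what makes this a digest test: every TCP data PDU carries a CRC32C that the accel framework has to compute, which is what the statistics check later in the log counts.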
00:37:36.132 00:37:36.132 Latency(us) 00:37:36.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.132 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:36.132 nvme0n1 : 2.00 19982.10 78.06 0.00 0.00 6398.34 3185.05 20027.80 00:37:36.132 =================================================================================================================== 00:37:36.132 Total : 19982.10 78.06 0.00 0.00 6398.34 3185.05 20027.80 00:37:36.132 0 00:37:36.132 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:36.132 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:36.132 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:36.132 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:36.132 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:36.132 | select(.opcode=="crc32c") 00:37:36.132 | "\(.module_name) \(.executed)"' 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1631249 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1631249 ']' 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1631249 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1631249 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1631249' 00:37:36.392 killing process with pid 1631249 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1631249 00:37:36.392 Received shutdown signal, test time was about 2.000000 seconds 00:37:36.392 00:37:36.392 Latency(us) 00:37:36.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.392 =================================================================================================================== 00:37:36.392 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:36.392 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1631249 00:37:36.651 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:36.651 14:05:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:36.651 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:36.651 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:36.651 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:36.651 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:36.651 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:36.651 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1631851 00:37:36.651 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1631851 /var/tmp/bperf.sock 00:37:36.652 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:36.652 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1631851 ']' 00:37:36.652 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:36.652 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:36.652 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:36.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:36.652 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:36.652 14:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:36.652 [2024-06-10 14:05:51.031429] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:37:36.652 [2024-06-10 14:05:51.031492] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631851 ] 00:37:36.652 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:36.652 Zero copy mechanism will not be used. 
00:37:36.652 EAL: No free 2048 kB hugepages reported on node 1 00:37:36.911 [2024-06-10 14:05:51.142418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.911 [2024-06-10 14:05:51.218683] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:37:37.478 14:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:37.478 14:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:37:37.478 14:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:37.478 14:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:37.478 14:05:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:38.046 14:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:38.046 14:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:38.304 nvme0n1 00:37:38.304 14:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:38.304 14:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:38.304 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:38.304 Zero copy mechanism will not be used. 00:37:38.304 Running I/O for 2 seconds... 
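The pass criterion for each of these runs is not the bdevperf throughput itself but the accelerator statistics collected right after it (see the get_accel_stats call at 14:05:50 above): with scan_dsa=false the expected module is software and the executed count for crc32c must be greater than zero. Reusing the $spdk shorthand from the previous sketch, the check amounts to:

  # ask the bdevperf app which accel module handled crc32c, and how many times
  $spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # a passing run prints something like: software <nonzero count>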
00:37:40.204 00:37:40.204 Latency(us) 00:37:40.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:40.204 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:40.204 nvme0n1 : 2.00 3318.19 414.77 0.00 0.00 4817.29 1120.67 9542.04 00:37:40.204 =================================================================================================================== 00:37:40.204 Total : 3318.19 414.77 0.00 0.00 4817.29 1120.67 9542.04 00:37:40.204 0 00:37:40.204 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:40.204 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:40.463 | select(.opcode=="crc32c") 00:37:40.463 | "\(.module_name) \(.executed)"' 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1631851 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1631851 ']' 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1631851 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:40.463 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1631851 00:37:40.722 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:40.722 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:40.722 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1631851' 00:37:40.722 killing process with pid 1631851 00:37:40.722 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1631851 00:37:40.723 Received shutdown signal, test time was about 2.000000 seconds 00:37:40.723 00:37:40.723 Latency(us) 00:37:40.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:40.723 =================================================================================================================== 00:37:40.723 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:40.723 14:05:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1631851 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:40.723 14:05:55 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1632606 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1632606 /var/tmp/bperf.sock 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1632606 ']' 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:40.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:40.723 14:05:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:40.982 [2024-06-10 14:05:55.206321] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:37:40.982 [2024-06-10 14:05:55.206385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632606 ] 00:37:40.982 EAL: No free 2048 kB hugepages reported on node 1 00:37:40.982 [2024-06-10 14:05:55.316384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.982 [2024-06-10 14:05:55.402586] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:37:41.918 14:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:41.918 14:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:37:41.918 14:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:41.918 14:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:41.918 14:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:42.177 14:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:42.177 14:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:42.436 nvme0n1 00:37:42.436 14:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:42.436 14:05:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:42.695 Running I/O for 2 seconds... 
00:37:44.599 00:37:44.599 Latency(us) 00:37:44.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.599 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:44.599 nvme0n1 : 2.01 19996.39 78.11 0.00 0.00 6387.42 4744.81 17720.93 00:37:44.599 =================================================================================================================== 00:37:44.599 Total : 19996.39 78.11 0.00 0.00 6387.42 4744.81 17720.93 00:37:44.599 0 00:37:44.599 14:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:44.599 14:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:44.599 14:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:44.599 14:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:44.599 | select(.opcode=="crc32c") 00:37:44.599 | "\(.module_name) \(.executed)"' 00:37:44.599 14:05:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1632606 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1632606 ']' 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1632606 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1632606 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1632606' 00:37:44.859 killing process with pid 1632606 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1632606 00:37:44.859 Received shutdown signal, test time was about 2.000000 seconds 00:37:44.859 00:37:44.859 Latency(us) 00:37:44.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.859 =================================================================================================================== 00:37:44.859 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:44.859 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1632606 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:45.119 14:05:59 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1633337 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1633337 /var/tmp/bperf.sock 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1633337 ']' 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:45.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:45.119 14:05:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:45.119 [2024-06-10 14:05:59.536381] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:37:45.119 [2024-06-10 14:05:59.536447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633337 ] 00:37:45.119 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:45.119 Zero copy mechanism will not be used. 
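The run_bperf call that starts here launches a fresh bdevperf for the 128 KiB randwrite pass. The flag meanings below are the usual SPDK bdevperf/app ones, noted as a reading aid rather than taken from this log:

    # Flags, as used in the trace above:
    #   -m 2                     core mask 0x2 (one reactor, on core 1)
    #   -r /var/tmp/bperf.sock   RPC socket served to the bperf_rpc / bperf_py helpers
    #   -w randwrite -o 131072   workload type and I/O size in bytes (128 KiB)
    #   -q 16 -t 2               queue depth and runtime in seconds
    #   -z --wait-for-rpc        stay idle until framework_start_init / perform_tests arrive over RPC
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &

    # with -o 131072 above the 65536-byte threshold, the sock layer logs that
    # zero copy will not be used for this run (the message repeated in the trace)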
00:37:45.119 EAL: No free 2048 kB hugepages reported on node 1 00:37:45.379 [2024-06-10 14:05:59.647732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.379 [2024-06-10 14:05:59.730773] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:37:46.327 14:06:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:46.327 14:06:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:37:46.327 14:06:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:46.327 14:06:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:46.327 14:06:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:46.327 14:06:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:46.327 14:06:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:46.891 nvme0n1 00:37:46.891 14:06:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:46.891 14:06:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:46.891 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:46.891 Zero copy mechanism will not be used. 00:37:46.891 Running I/O for 2 seconds... 
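After each two-second run the script checks which accel module actually executed the crc32c work. A minimal sketch of that check, pieced together from the get_accel_stats, jq and comparison steps traced before and after this point (with scan_dsa=false the expected module is software):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # pull accel statistics from bdevperf and keep only the crc32c row
    read -r acc_module acc_executed < <($RPC accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

    # the pass succeeds only if digests were actually computed, and by the expected module
    (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "digest accel check OK"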
00:37:48.795 00:37:48.795 Latency(us) 00:37:48.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.795 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:48.795 nvme0n1 : 2.00 4556.73 569.59 0.00 0.00 3505.47 2477.26 11953.77 00:37:48.795 =================================================================================================================== 00:37:48.795 Total : 4556.73 569.59 0.00 0.00 3505.47 2477.26 11953.77 00:37:48.795 0 00:37:48.795 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:48.795 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:48.795 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:48.795 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:48.795 | select(.opcode=="crc32c") 00:37:48.795 | "\(.module_name) \(.executed)"' 00:37:48.795 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1633337 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1633337 ']' 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1633337 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1633337 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1633337' 00:37:49.054 killing process with pid 1633337 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1633337 00:37:49.054 Received shutdown signal, test time was about 2.000000 seconds 00:37:49.054 00:37:49.054 Latency(us) 00:37:49.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:49.054 =================================================================================================================== 00:37:49.054 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:49.054 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1633337 00:37:49.314 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1630962 00:37:49.314 14:06:03 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1630962 ']' 00:37:49.314 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1630962 00:37:49.314 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:37:49.314 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:49.314 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1630962 00:37:49.314 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:49.314 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:49.314 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1630962' 00:37:49.314 killing process with pid 1630962 00:37:49.314 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1630962 00:37:49.314 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1630962 00:37:49.573 00:37:49.573 real 0m18.295s 00:37:49.573 user 0m35.500s 00:37:49.573 sys 0m5.182s 00:37:49.573 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:49.573 14:06:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:49.573 ************************************ 00:37:49.573 END TEST nvmf_digest_clean 00:37:49.573 ************************************ 00:37:49.573 14:06:03 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:49.573 14:06:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:49.573 14:06:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:49.573 14:06:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:49.573 ************************************ 00:37:49.573 START TEST nvmf_digest_error 00:37:49.573 ************************************ 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1634162 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1634162 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1634162 ']' 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:49.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:49.573 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:49.830 [2024-06-10 14:06:04.082233] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:37:49.830 [2024-06-10 14:06:04.082293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:49.830 EAL: No free 2048 kB hugepages reported on node 1 00:37:49.830 [2024-06-10 14:06:04.209855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:49.830 [2024-06-10 14:06:04.294329] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:49.830 [2024-06-10 14:06:04.294373] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:49.830 [2024-06-10 14:06:04.294386] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:49.830 [2024-06-10 14:06:04.294398] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:49.830 [2024-06-10 14:06:04.294408] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
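For nvmf_digest_error the target is restarted inside the cvl_0_0_ns_spdk network namespace with tracing enabled and initialization deferred. A sketch of that launch, taken from the nvmfappstart and waitforlisten lines traced above (the PID and sockets are the ones this run reports):

    # nvmf_digest_error target, as launched in the trace above
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!        # 1634162 in this run; waitforlisten polls /var/tmp/spdk.sock for it

    # -e 0xFFFF enables the tracepoint group mask reported in the notices above, and
    # --wait-for-rpc keeps the app idle so crc32c can be re-routed to the "error"
    # accel module (accel_assign_opc, a few entries further down) before framework init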
00:37:49.830 [2024-06-10 14:06:04.294434] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.765 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:50.765 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:37:50.765 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:50.765 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:50.765 14:06:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:50.765 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:50.766 [2024-06-10 14:06:05.032701] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:50.766 null0 00:37:50.766 [2024-06-10 14:06:05.127682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:50.766 [2024-06-10 14:06:05.151885] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1634686 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1634686 /var/tmp/bperf.sock 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1634686 ']' 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
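The stream of data digest errors further down is expected: the error test routes crc32c through the error accel module and then corrupts a batch of digest computations. A brief recap of the three RPCs that set this up, all of which appear verbatim in this trace (rpc_cmd talks to the target's default /var/tmp/spdk.sock):

    RPC_TGT="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    # before framework init: route all crc32c work through the "error" accel module
    $RPC_TGT accel_assign_opc -o crc32c -m error

    # per test case: start with injection disabled, then corrupt a batch of crc32c
    # operations so the initiator sees NVMe/TCP data digest errors and retries
    $RPC_TGT accel_error_inject_error -o crc32c -t disable
    $RPC_TGT accel_error_inject_error -o crc32c -t corrupt -i 256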
00:37:50.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:50.766 14:06:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:37:50.766 [2024-06-10 14:06:05.205516] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:37:50.766 [2024-06-10 14:06:05.205581] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1634686 ] 00:37:51.024 EAL: No free 2048 kB hugepages reported on node 1 00:37:51.024 [2024-06-10 14:06:05.315583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:51.024 [2024-06-10 14:06:05.401481] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.959 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:51.959 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:37:51.959 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:51.959 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:51.959 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:51.959 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:51.959 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:51.959 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:51.959 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:51.959 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:52.217 nvme0n1 00:37:52.217 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:52.217 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:52.217 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:52.217 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:52.217 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:52.217 14:06:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:52.476 Running I/O for 2 seconds... 00:37:52.476 [2024-06-10 14:06:06.802104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.476 [2024-06-10 14:06:06.802146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.476 [2024-06-10 14:06:06.802163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.476 [2024-06-10 14:06:06.816542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.476 [2024-06-10 14:06:06.816574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.476 [2024-06-10 14:06:06.816595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.476 [2024-06-10 14:06:06.830061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.476 [2024-06-10 14:06:06.830090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.476 [2024-06-10 14:06:06.830106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.476 [2024-06-10 14:06:06.840928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.476 [2024-06-10 14:06:06.840956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.476 [2024-06-10 14:06:06.840972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.476 [2024-06-10 14:06:06.856034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.476 [2024-06-10 14:06:06.856062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.476 [2024-06-10 14:06:06.856078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.476 [2024-06-10 14:06:06.867861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.476 [2024-06-10 14:06:06.867888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.476 [2024-06-10 14:06:06.867903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.476 [2024-06-10 14:06:06.882002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.476 [2024-06-10 14:06:06.882030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2694 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.476 [2024-06-10 14:06:06.882044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.476 [2024-06-10 14:06:06.893707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.476 [2024-06-10 14:06:06.893734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.476 [2024-06-10 14:06:06.893748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.476 [2024-06-10 14:06:06.905386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.476 [2024-06-10 14:06:06.905413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.476 [2024-06-10 14:06:06.905427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.476 [2024-06-10 14:06:06.919412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.476 [2024-06-10 14:06:06.919440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.476 [2024-06-10 14:06:06.919454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.476 [2024-06-10 14:06:06.931905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.476 [2024-06-10 14:06:06.931933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.476 [2024-06-10 14:06:06.931948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.476 [2024-06-10 14:06:06.943532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.476 [2024-06-10 14:06:06.943559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.476 [2024-06-10 14:06:06.943573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.735 [2024-06-10 14:06:06.956820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.735 [2024-06-10 14:06:06.956847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.735 [2024-06-10 14:06:06.956861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.735 [2024-06-10 14:06:06.970056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.735 [2024-06-10 14:06:06.970083] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.735 [2024-06-10 14:06:06.970097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.735 [2024-06-10 14:06:06.982795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:06.982822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:06.982841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:06.994711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:06.994739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:06.994754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.007407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.007435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.007450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.019958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.019985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.020000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.033411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.033438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.033452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.044185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.044212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.044227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.060148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 
14:06:07.060175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.060190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.071638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.071665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.071679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.086569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.086602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.086616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.098004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.098035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.098050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.110693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.110720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.110735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.123938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.123966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.123981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.137329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.137357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.137372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.148082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.148111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.148125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.162520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.162548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.162563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.175324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.175353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.175369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.186838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.186866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.186881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.736 [2024-06-10 14:06:07.200422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.736 [2024-06-10 14:06:07.200450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.736 [2024-06-10 14:06:07.200464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.212825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.212853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.212868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.225060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.225088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.225102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.238342] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.238370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.238384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.250745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.250772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.250787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.262671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.262698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.262713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.275870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.275897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.275911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.288068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.288095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.288110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.301492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.301519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.301533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.314691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.314718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.314737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:37:52.995 [2024-06-10 14:06:07.326496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.326523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.326538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.340006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.340033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.340048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.351122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.351149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.351163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.364280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.364306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.364320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.376977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.377005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.377020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.389891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.389917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.389932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.401851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.401878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.401893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.415380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.995 [2024-06-10 14:06:07.415407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.995 [2024-06-10 14:06:07.415421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.995 [2024-06-10 14:06:07.427498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.996 [2024-06-10 14:06:07.427530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.996 [2024-06-10 14:06:07.427544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.996 [2024-06-10 14:06:07.440903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.996 [2024-06-10 14:06:07.440930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.996 [2024-06-10 14:06:07.440945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.996 [2024-06-10 14:06:07.453212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:52.996 [2024-06-10 14:06:07.453239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.996 [2024-06-10 14:06:07.453254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.996 [2024-06-10 14:06:07.465796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.254 [2024-06-10 14:06:07.465824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.254 [2024-06-10 14:06:07.465839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.254 [2024-06-10 14:06:07.478838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.254 [2024-06-10 14:06:07.478867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.254 [2024-06-10 14:06:07.478881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.254 [2024-06-10 14:06:07.489605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.254 [2024-06-10 14:06:07.489632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.254 [2024-06-10 14:06:07.489647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.254 [2024-06-10 14:06:07.504940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.254 [2024-06-10 14:06:07.504969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.254 [2024-06-10 14:06:07.504983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.254 [2024-06-10 14:06:07.517445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.517473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.517488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.528024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.528052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.528071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.541810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.541839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.541853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.555282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.555310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.555325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.567567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.567603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.567618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.580785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.580813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:53.255 [2024-06-10 14:06:07.580828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.593165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.593192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.593206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.604689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.604716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.604732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.618787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.618815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.618830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.629652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.629679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.629693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.644386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.644422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.644437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.655869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.655898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.655912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.669298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.669327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25101 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.669342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.681564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.681598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.681614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.695286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.695315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.695330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.707957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.707988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.708003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.255 [2024-06-10 14:06:07.719802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.255 [2024-06-10 14:06:07.719830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.255 [2024-06-10 14:06:07.719845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.733372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.733400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.733415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.746257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.746285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.746301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.758163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.758192] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.758207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.771547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.771580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.771596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.783790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.783817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.783832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.795143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.795171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.795186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.809189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.809216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.809231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.823079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.823107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.823122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.834279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.834306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.834320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.848249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 
14:06:07.848278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.848292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.860303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.860330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.860348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.872366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.872392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.872407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.886197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.886226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.886240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.899920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.899948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.899963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.911103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.911130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.911145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.925290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.925317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.514 [2024-06-10 14:06:07.925332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.514 [2024-06-10 14:06:07.936542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1b779b0) 00:37:53.514 [2024-06-10 14:06:07.936569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.515 [2024-06-10 14:06:07.936590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.515 [2024-06-10 14:06:07.949749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.515 [2024-06-10 14:06:07.949777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.515 [2024-06-10 14:06:07.949792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.515 [2024-06-10 14:06:07.962765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.515 [2024-06-10 14:06:07.962792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.515 [2024-06-10 14:06:07.962807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.515 [2024-06-10 14:06:07.975040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.515 [2024-06-10 14:06:07.975072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.515 [2024-06-10 14:06:07.975087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.773 [2024-06-10 14:06:07.988206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.773 [2024-06-10 14:06:07.988233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.773 [2024-06-10 14:06:07.988248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.773 [2024-06-10 14:06:08.001349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.773 [2024-06-10 14:06:08.001375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.773 [2024-06-10 14:06:08.001390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.773 [2024-06-10 14:06:08.013400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.773 [2024-06-10 14:06:08.013427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.773 [2024-06-10 14:06:08.013442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.773 [2024-06-10 14:06:08.025779] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.773 [2024-06-10 14:06:08.025806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.773 [2024-06-10 14:06:08.025821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.773 [2024-06-10 14:06:08.038811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.773 [2024-06-10 14:06:08.038838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.773 [2024-06-10 14:06:08.038853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.773 [2024-06-10 14:06:08.051753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.051781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.051795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.065051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.065078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.065093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.075808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.075835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.075850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.090044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.090072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.090086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.102097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.102124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.102139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.116117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.116144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.116159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.128032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.128058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.128072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.139927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.139955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.139969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.153051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.153079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.153093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.166985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.167014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.167028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.177434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.177463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.177478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.190466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.190498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.190512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.205129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.205156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.205171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.216358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.216385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.216400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.229228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.229256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.229271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:53.774 [2024-06-10 14:06:08.242331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:53.774 [2024-06-10 14:06:08.242359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:53.774 [2024-06-10 14:06:08.242373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.254307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.254334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.254348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.268070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.268097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.268112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.279265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.279292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.279307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.292116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.292144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.292158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.305526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.305553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.305568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.318655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.318682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.318696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.330249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.330277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.330292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.343350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.343377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.343392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.355480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.355507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.355522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.368259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.368286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:54.033 [2024-06-10 14:06:08.368301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.380849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.380877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.380891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.393834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.393861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.393875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.405294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.405322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.405340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.418660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.418687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.418701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.431875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.431901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.431916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.444381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.444408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.444422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.457743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.457770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:22595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.457784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.469175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.469202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.469217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.482770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.482798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.482812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.033 [2024-06-10 14:06:08.495318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.033 [2024-06-10 14:06:08.495346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.033 [2024-06-10 14:06:08.495361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.292 [2024-06-10 14:06:08.507639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.292 [2024-06-10 14:06:08.507666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.292 [2024-06-10 14:06:08.507681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.292 [2024-06-10 14:06:08.519425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.292 [2024-06-10 14:06:08.519457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.292 [2024-06-10 14:06:08.519474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.292 [2024-06-10 14:06:08.533492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.292 [2024-06-10 14:06:08.533520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.292 [2024-06-10 14:06:08.533535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.292 [2024-06-10 14:06:08.543977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.292 [2024-06-10 14:06:08.544005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.292 [2024-06-10 14:06:08.544019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.292 [2024-06-10 14:06:08.558521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.292 [2024-06-10 14:06:08.558549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.292 [2024-06-10 14:06:08.558564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.292 [2024-06-10 14:06:08.571935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.292 [2024-06-10 14:06:08.571962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.292 [2024-06-10 14:06:08.571976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.292 [2024-06-10 14:06:08.582573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.292 [2024-06-10 14:06:08.582606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.292 [2024-06-10 14:06:08.582620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.292 [2024-06-10 14:06:08.597134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.292 [2024-06-10 14:06:08.597161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.293 [2024-06-10 14:06:08.597176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.293 [2024-06-10 14:06:08.608181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.293 [2024-06-10 14:06:08.608208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.293 [2024-06-10 14:06:08.608223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.293 [2024-06-10 14:06:08.622105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.293 [2024-06-10 14:06:08.622133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.293 [2024-06-10 14:06:08.622147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.293 [2024-06-10 14:06:08.635404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 
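The dump above repeats one three-record pattern per injected error: nvme_tcp_accel_seq_recv_compute_crc32_done() in the initiator reports a data digest error on the queue pair, nvme_qpair.c prints the READ that was hit, and that command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which bdev_nvme counts (and, with --bdev-retry-count -1 set for these passes, retries). When reading a saved copy of this console output, the completions can be tallied with a quick grep; the log file name below is only a placeholder:

    # Count the retried completions in a saved console log (placeholder file name).
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error.log

That figure should line up with the command_transient_transport_error counter that digest.sh reads back over RPC once the run finishes (157 for this pass, as seen further down).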
00:37:54.293 [2024-06-10 14:06:08.635431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.293 [2024-06-10 14:06:08.635445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.293 [2024-06-10 14:06:08.646916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.293 [2024-06-10 14:06:08.646943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.293 [2024-06-10 14:06:08.646957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.293 [2024-06-10 14:06:08.660588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.293 [2024-06-10 14:06:08.660615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.293 [2024-06-10 14:06:08.660630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.293 [2024-06-10 14:06:08.672306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.293 [2024-06-10 14:06:08.672334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.293 [2024-06-10 14:06:08.672348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.293 [2024-06-10 14:06:08.685862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.293 [2024-06-10 14:06:08.685889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.293 [2024-06-10 14:06:08.685904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.293 [2024-06-10 14:06:08.696701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.293 [2024-06-10 14:06:08.696728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.293 [2024-06-10 14:06:08.696742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.293 [2024-06-10 14:06:08.710254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 00:37:54.293 [2024-06-10 14:06:08.710281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.293 [2024-06-10 14:06:08.710295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:54.293 [2024-06-10 14:06:08.722056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1b779b0) 
00:37:54.293 [2024-06-10 14:06:08.722084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:54.293 [2024-06-10 14:06:08.722098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:37:54.293 [2024-06-10 14:06:08.736085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 
00:37:54.293 [2024-06-10 14:06:08.736113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:54.293 [2024-06-10 14:06:08.736132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:37:54.293 [2024-06-10 14:06:08.749384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 
00:37:54.293 [2024-06-10 14:06:08.749412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:54.293 [2024-06-10 14:06:08.749427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:37:54.293 [2024-06-10 14:06:08.761135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 
00:37:54.293 [2024-06-10 14:06:08.761164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:54.293 [2024-06-10 14:06:08.761179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:37:54.551 [2024-06-10 14:06:08.775341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 
00:37:54.551 [2024-06-10 14:06:08.775369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:54.552 [2024-06-10 14:06:08.775383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:37:54.552 [2024-06-10 14:06:08.786469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b779b0) 
00:37:54.552 [2024-06-10 14:06:08.786497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:54.552 [2024-06-10 14:06:08.786511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:37:54.552 
00:37:54.552 Latency(us) 
00:37:54.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:37:54.552 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 
00:37:54.552 nvme0n1 : 2.01 20022.63 78.21 0.00 0.00 6384.26 2949.12 16567.50 
00:37:54.552 =================================================================================================================== 
00:37:54.552 Total : 20022.63 78.21 0.00 0.00 6384.26 2949.12 16567.50 
00:37:54.552 0 
00:37:54.552 14:06:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:54.552 14:06:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:54.552 14:06:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:54.552 | .driver_specific 00:37:54.552 | .nvme_error 00:37:54.552 | .status_code 00:37:54.552 | .command_transient_transport_error' 00:37:54.552 14:06:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:54.810 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 )) 00:37:54.810 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1634686 00:37:54.810 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1634686 ']' 00:37:54.810 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1634686 00:37:54.810 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:37:54.810 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:54.810 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1634686 00:37:54.810 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:54.810 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:54.810 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1634686' 00:37:54.810 killing process with pid 1634686 00:37:54.810 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1634686 00:37:54.810 Received shutdown signal, test time was about 2.000000 seconds 00:37:54.810 00:37:54.810 Latency(us) 00:37:54.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:54.810 =================================================================================================================== 00:37:54.810 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:54.810 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1634686 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1635594 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1635594 /var/tmp/bperf.sock 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1635594 ']' 
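Each pass is scored by reading those transient-error completions back from the bdevperf instance over its RPC socket: digest.sh's get_transient_errcount wraps bdev_get_iostat for nvme0n1 and extracts the command_transient_transport_error counter with the jq filter shown above, and the pass only counts as successful if the value is non-zero ((( 157 > 0 )) here). A minimal stand-alone sketch of that check, assuming it is run from an SPDK checkout instead of the absolute Jenkins workspace path, would be:

    # Read the per-status NVMe error counters kept by the bdevperf app on bperf.sock
    # (they are available because bdev_nvme_set_options was called with --nvme-error-stat).
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # At least one injected corruption must have surfaced as a transient transport error.
    (( errcount > 0 )) || echo "no transient transport errors were recorded" >&2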
00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:55.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:55.068 14:06:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:55.069 [2024-06-10 14:06:09.350559] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:37:55.069 [2024-06-10 14:06:09.350639] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635594 ] 00:37:55.069 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:55.069 Zero copy mechanism will not be used. 00:37:55.069 EAL: No free 2048 kB hugepages reported on node 1 00:37:55.069 [2024-06-10 14:06:09.460873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.375 [2024-06-10 14:06:09.548250] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:37:55.952 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:55.952 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:37:55.952 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:55.952 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:56.211 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:56.211 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:56.211 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:56.211 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:56.211 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:56.211 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:56.469 nvme0n1 00:37:56.469 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:56.469 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:56.469 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:56.469 14:06:10 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:56.469 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:56.469 14:06:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:56.469 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:56.469 Zero copy mechanism will not be used. 00:37:56.469 Running I/O for 2 seconds... 00:37:56.728 [2024-06-10 14:06:10.941581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.728 [2024-06-10 14:06:10.941625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.728 [2024-06-10 14:06:10.941642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.728 [2024-06-10 14:06:10.953914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.728 [2024-06-10 14:06:10.953947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.728 [2024-06-10 14:06:10.953963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.728 [2024-06-10 14:06:10.964104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.728 [2024-06-10 14:06:10.964135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.728 [2024-06-10 14:06:10.964150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.728 [2024-06-10 14:06:10.973272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.728 [2024-06-10 14:06:10.973300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.728 [2024-06-10 14:06:10.973315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.728 [2024-06-10 14:06:10.981860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.728 [2024-06-10 14:06:10.981889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.728 [2024-06-10 14:06:10.981903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.728 [2024-06-10 14:06:10.990901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.728 [2024-06-10 14:06:10.990932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.728 [2024-06-10 14:06:10.990947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.728 [2024-06-10 14:06:11.000291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.728 [2024-06-10 14:06:11.000321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.728 [2024-06-10 14:06:11.000336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.728 [2024-06-10 14:06:11.010111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.728 [2024-06-10 14:06:11.010141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.728 [2024-06-10 14:06:11.010155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.728 [2024-06-10 14:06:11.021921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.728 [2024-06-10 14:06:11.021949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.728 [2024-06-10 14:06:11.021964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.728 [2024-06-10 14:06:11.034838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.728 [2024-06-10 14:06:11.034865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.034880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.045287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.045316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.045331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.055987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.056015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.056030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.066033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.066061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
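These len:32 reads belong to the second error pass started just above (run_bperf_err randread 131072 16). The xtrace shows the recipe: a fresh bdevperf is launched on /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev-level retries are switched on, the controller is attached with the TCP data digest (--ddgst) enabled, crc32c corruption is injected through the accel_error module so the data digests stop matching, and perform_tests drives the timed run whose failures are being logged here. Condensed into the underlying commands (paths shortened to an SPDK repo root; $RPC_SOCK stands in for whichever application socket the harness's rpc_cmd helper targets, which this trace does not show):

    # Start bdevperf on its own RPC socket: 128 KiB random reads, queue depth 16, 2 s runs.
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    # Keep per-status NVMe error counters and retry failed I/O indefinitely in the bdev layer.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the NVMe-oF subsystem with data digest enabled on the TCP connection.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject crc32c corruption via the accel_error module (arguments copied from the trace).
    scripts/rpc.py -s "$RPC_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the timed I/O run through bdevperf's helper script.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests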
00:37:56.729 [2024-06-10 14:06:11.066076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.076511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.076540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.076559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.085758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.085787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.085801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.094763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.094791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.094806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.104836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.104864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.104880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.114094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.114123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.114138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.123974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.124002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.124017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.133148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.133177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.133192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.142808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.142837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.142852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.152115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.152144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.152159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.160613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.160643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.160658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.169092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.169121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.169136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.178469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.178500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.178515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.187455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.187484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.187499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.729 [2024-06-10 14:06:11.196341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.729 [2024-06-10 14:06:11.196370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.729 [2024-06-10 14:06:11.196385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.204726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.204754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.204769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.212888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.212918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.212932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.221142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.221170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.221186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.229327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.229356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.229375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.237598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.237627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.237643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.245933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.245963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.245978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.254210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 
[2024-06-10 14:06:11.254239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.254254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.263181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.263209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.263224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.271710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.271737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.271751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.279976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.280006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.280021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.288165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.288193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.288208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.296313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.296341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.296357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.304457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.304491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.304506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.312675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.312704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.312719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.989 [2024-06-10 14:06:11.321003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.989 [2024-06-10 14:06:11.321032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.989 [2024-06-10 14:06:11.321047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.329235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.329265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.329280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.337420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.337449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.337463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.345603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.345631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.345646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.353764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.353793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.353807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.361938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.361966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.361980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.370114] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.370142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.370157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.378427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.378455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.378470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.386606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.386635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.386649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.394923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.394951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.394966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.403095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.403124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.403139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.411228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.411256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.411271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.419402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.419431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.419445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:37:56.990 [2024-06-10 14:06:11.427582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.427611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.427625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.435696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.435724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.435739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.444124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.444152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.444171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.990 [2024-06-10 14:06:11.452433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:56.990 [2024-06-10 14:06:11.452464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.990 [2024-06-10 14:06:11.452479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.249 [2024-06-10 14:06:11.460747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.249 [2024-06-10 14:06:11.460776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.249 [2024-06-10 14:06:11.460790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.249 [2024-06-10 14:06:11.468980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.249 [2024-06-10 14:06:11.469009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.249 [2024-06-10 14:06:11.469024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.249 [2024-06-10 14:06:11.477150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.249 [2024-06-10 14:06:11.477179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.249 [2024-06-10 14:06:11.477194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.249 [2024-06-10 14:06:11.485370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.249 [2024-06-10 14:06:11.485398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.249 [2024-06-10 14:06:11.485413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.249 [2024-06-10 14:06:11.493667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.249 [2024-06-10 14:06:11.493694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.249 [2024-06-10 14:06:11.493708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.249 [2024-06-10 14:06:11.501840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.249 [2024-06-10 14:06:11.501869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.249 [2024-06-10 14:06:11.501884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.249 [2024-06-10 14:06:11.510137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.249 [2024-06-10 14:06:11.510166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.249 [2024-06-10 14:06:11.510181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.249 [2024-06-10 14:06:11.518407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.249 [2024-06-10 14:06:11.518440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.518455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.526596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.526625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.526640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.534830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.534860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.534874] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.542993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.543022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.543037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.551180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.551209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.551224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.559487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.559517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.559531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.567819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.567848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.567863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.576204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.576233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.576248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.584388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.584416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.584431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.592809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.592837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.592851] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.601062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.601091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.601106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.609215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.609242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.609257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.617415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.617444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.617459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.625793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.625822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.625836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.634973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.635002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.635017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.643311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.643340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.643355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.651570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.651604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:57.250 [2024-06-10 14:06:11.651618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.659780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.659809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.659827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.668120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.668149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.668163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.676326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.676354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.676369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.684535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.684562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.684584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.692819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.692847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.692862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.701050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.701080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.701095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.710274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.710304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.710319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.250 [2024-06-10 14:06:11.719643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.250 [2024-06-10 14:06:11.719671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.250 [2024-06-10 14:06:11.719686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.509 [2024-06-10 14:06:11.728642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.509 [2024-06-10 14:06:11.728674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.728689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.738624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.738653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.738668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.749010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.749040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.749055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.759899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.759930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.759944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.770727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.770757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.770772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.782291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.782320] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.782335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.792864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.792893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.792908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.803842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.803872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.803887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.814920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.814950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.814965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.825747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.825778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.825797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.837180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.837210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.837225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.848212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.848243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.848258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.856457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.856486] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.856501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.864656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.864684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.864699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.873041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.873070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.873084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.881447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.881475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.881490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.889753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.889782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.889797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.898116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.898145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.898159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.906471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.906504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.906519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.914827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 
00:37:57.510 [2024-06-10 14:06:11.914855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.914869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.923154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.923183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.923197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.931509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.931538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.931553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.939902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.939930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.939945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.948176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.948205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.948219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.956428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.956456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.956471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.964675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.964703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.964718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.510 [2024-06-10 14:06:11.973087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.510 [2024-06-10 14:06:11.973116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.510 [2024-06-10 14:06:11.973130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:11.981290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:11.981319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:11.981333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:11.989500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:11.989528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:11.989542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:11.997762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:11.997790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:11.997804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.006032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.006061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.006075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.014204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.014234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.014249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.022526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.022555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.022569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.030815] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.030844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.030858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.039110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.039139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.039153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.047396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.047426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.047448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.055748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.055777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.055792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.064122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.064151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.064165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.072554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.072591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.072608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.080924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.080954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.080970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:37:57.770 [2024-06-10 14:06:12.089301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.089329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.089343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.097839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.097868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.097882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.106120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.106149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.106164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.114402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.770 [2024-06-10 14:06:12.114431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.770 [2024-06-10 14:06:12.114445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.770 [2024-06-10 14:06:12.122874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.122907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.122921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.131217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.131246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.131260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.139566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.139604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.139619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.147879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.147908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.147922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.156313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.156341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.156355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.164535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.164564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.164585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.172843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.172872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.172886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.181117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.181146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.181161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.189523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.189552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.189566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.197840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.197869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.197884] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.206132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.206161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.206175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.214445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.214474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.214488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.222628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.222656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.222670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.230883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.230913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.230927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.771 [2024-06-10 14:06:12.239140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:57.771 [2024-06-10 14:06:12.239169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.771 [2024-06-10 14:06:12.239184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.030 [2024-06-10 14:06:12.247325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.030 [2024-06-10 14:06:12.247354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.030 [2024-06-10 14:06:12.247368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.030 [2024-06-10 14:06:12.255602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.030 [2024-06-10 14:06:12.255631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.030 [2024-06-10 14:06:12.255646] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.030 [2024-06-10 14:06:12.263976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.030 [2024-06-10 14:06:12.264005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.030 [2024-06-10 14:06:12.264024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.030 [2024-06-10 14:06:12.272372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.272401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.272416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.280830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.280859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.280875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.289117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.289147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.289162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.297525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.297555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.297570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.305856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.305885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.305900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.314078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.314107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:58.031 [2024-06-10 14:06:12.314122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.322354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.322383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.322398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.330624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.330653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.330667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.338919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.338948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.338963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.347233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.347261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.347276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.355442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.355471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.355487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.365068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.365098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.365118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.375038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.375067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.375082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.384264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.384293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.384308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.394642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.394672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.394687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.404934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.404965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.404979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.414273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.414302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.414321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.423313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.423346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.423361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.432868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.432899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.432913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.441876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.441905] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.441919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.450871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.450900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.450915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.460394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.460424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.460439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.470695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.470725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.470740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.481098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.481126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.481141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.031 [2024-06-10 14:06:12.491945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.031 [2024-06-10 14:06:12.491976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.031 [2024-06-10 14:06:12.491991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.290 [2024-06-10 14:06:12.501792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.290 [2024-06-10 14:06:12.501826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.290 [2024-06-10 14:06:12.501841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.290 [2024-06-10 14:06:12.511254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.290 [2024-06-10 14:06:12.511283] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.290 [2024-06-10 14:06:12.511298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.290 [2024-06-10 14:06:12.519879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.290 [2024-06-10 14:06:12.519909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.290 [2024-06-10 14:06:12.519924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.290 [2024-06-10 14:06:12.529146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.290 [2024-06-10 14:06:12.529176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.290 [2024-06-10 14:06:12.529191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.290 [2024-06-10 14:06:12.537976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.290 [2024-06-10 14:06:12.538007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.290 [2024-06-10 14:06:12.538021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.290 [2024-06-10 14:06:12.546180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.290 [2024-06-10 14:06:12.546209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.290 [2024-06-10 14:06:12.546224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.290 [2024-06-10 14:06:12.554486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.290 [2024-06-10 14:06:12.554516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.290 [2024-06-10 14:06:12.554530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.290 [2024-06-10 14:06:12.562880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.562909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.562924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.571234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 
00:37:58.291 [2024-06-10 14:06:12.571263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.571277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.579626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.579654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.579669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.587948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.587978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.587992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.596086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.596115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.596130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.604449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.604477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.604492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.612674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.612703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.612717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.620949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.620978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.620993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.629356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.629385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.629400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.637617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.637646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.637661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.645901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.645930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.645948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.654201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.654229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.654244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.662419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.662448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.662462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.670636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.670664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.670680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.678808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.678837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.678852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.687008] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.687037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.687052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.695243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.695272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.695287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.703382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.703410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.703425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.711574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.711610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.711624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.719781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.719813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.719827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.728016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.728045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.728060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.736268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.736296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.736311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:37:58.291 [2024-06-10 14:06:12.744500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.744529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.744544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.291 [2024-06-10 14:06:12.752794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.291 [2024-06-10 14:06:12.752822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.291 [2024-06-10 14:06:12.752836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.761129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.761157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.761172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.769356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.769385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.769399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.777552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.777588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.777603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.785772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.785801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.785816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.794088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.794117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.794131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.802267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.802298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.802313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.810508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.810537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.810551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.818806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.818834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.818849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.827083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.827112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.827127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.835259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.835287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.835301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.843452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.843481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.843495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.851848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.851877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.851892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.860160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.860189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.860207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.868334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.868363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.868377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.876731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.876760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.876775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.885058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.885087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.885101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.893319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.893347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.893362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.901526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.901555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.901570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.909824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.909853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.909868] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.918136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.918164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.918179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:58.550 [2024-06-10 14:06:12.926356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x225c280) 00:37:58.550 [2024-06-10 14:06:12.926385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:58.550 [2024-06-10 14:06:12.926400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:58.550 00:37:58.550 Latency(us) 00:37:58.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:58.550 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:58.550 nvme0n1 : 2.00 3553.29 444.16 0.00 0.00 4498.11 1251.74 13736.35 00:37:58.550 =================================================================================================================== 00:37:58.550 Total : 3553.29 444.16 0.00 0.00 4498.11 1251.74 13736.35 00:37:58.550 0 00:37:58.550 14:06:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:58.550 14:06:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:58.550 14:06:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:58.550 14:06:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:58.550 | .driver_specific 00:37:58.550 | .nvme_error 00:37:58.550 | .status_code 00:37:58.550 | .command_transient_transport_error' 00:37:58.809 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 229 > 0 )) 00:37:58.809 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1635594 00:37:58.809 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1635594 ']' 00:37:58.809 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1635594 00:37:58.809 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:37:58.809 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:58.809 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1635594 00:37:58.809 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:58.809 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:58.809 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1635594' 00:37:58.809 
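The trace above closes out the first pass: after two seconds of random reads with corrupted data digests, bdevperf reports 3553.29 IOPS with every completion returned as COMMAND TRANSIENT TRANSPORT ERROR, and host/digest.sh then pulls the per-bdev error counters over the bperf RPC socket, confirms a non-zero count (229 here), and kills the bdevperf process (its shutdown output continues below). A minimal standalone sketch of that same check, assuming the bperf socket at /var/tmp/bperf.sock is still listening, that rpc.py is run from an SPDK checkout (the log uses the full Jenkins workspace path), and that jq is available; the wrapper name check_transient_errors is illustrative, not part of the script:

    # Ask the running bdevperf app for per-bdev I/O statistics and pull out the
    # counter of completions that ended as transient transport errors, which is
    # what the host NVMe/TCP driver reports when a data digest check fails.
    check_transient_errors() {
        local bdev=$1 errcount
        errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
        # The test only passes if at least one digest error actually reached the host.
        (( errcount > 0 ))
    }

    check_transient_errors nvme0n1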
killing process with pid 1635594 00:37:58.809 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1635594 00:37:58.809 Received shutdown signal, test time was about 2.000000 seconds 00:37:58.809 00:37:58.809 Latency(us) 00:37:58.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:58.809 =================================================================================================================== 00:37:58.809 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:58.809 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1635594 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1636177 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1636177 /var/tmp/bperf.sock 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1636177 ']' 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:59.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:59.068 14:06:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:59.068 [2024-06-10 14:06:13.488401] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
00:37:59.068 [2024-06-10 14:06:13.488464] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636177 ] 00:37:59.327 EAL: No free 2048 kB hugepages reported on node 1 00:37:59.327 [2024-06-10 14:06:13.599840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:59.327 [2024-06-10 14:06:13.676188] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:38:00.260 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:00.260 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:38:00.260 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:00.261 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:00.261 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:00.261 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.261 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:00.261 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.261 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:00.261 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:00.518 nvme0n1 00:38:00.518 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:00.518 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.518 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:00.518 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.518 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:00.518 14:06:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:00.777 Running I/O for 2 seconds... 
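At this point the second error-injection pass is fully set up: bdevperf has been relaunched for a 4 KiB random-write workload at queue depth 128, NVMe error statistics are enabled, the controller is attached over TCP with data digest turned on (--ddgst), and the accel framework is told to corrupt every 256th crc32c operation before perform_tests starts the two-second run. A condensed sketch of that traced setup, with the workspace paths shortened to an SPDK checkout; the accel_error_inject_error calls go through rpc_cmd with xtrace disabled in the log, so their RPC socket is not visible, and $RPC_SOCK below is a placeholder for whichever application is meant to miscompute the digests:

    # Keep per-status completion counters instead of only printing errors.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any stale crc32c injection, then attach the subsystem with data digest
    # enabled so every payload CRC is actually computed and verified on receive.
    scripts/rpc.py -s "$RPC_SOCK" accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 256th crc32c operation, then kick off the timed randwrite job;
    # the resulting digest failures show up below as TRANSIENT TRANSPORT ERROR completions.
    scripts/rpc.py -s "$RPC_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests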
00:38:00.777 [2024-06-10 14:06:15.060944] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.061209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.061248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.073916] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.074197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.074227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.086881] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.087150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.087178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.099804] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.100076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.100104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.112916] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.113186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.113212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.125802] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.126071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.126097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.138700] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.138971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.138997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:38:00.777 [2024-06-10 14:06:15.151560] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.151838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.151864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.164422] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.164698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.164724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.177331] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.177596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.177622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.190170] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.190440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.190467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.203083] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.203352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.203378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.215906] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.216173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.216199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.228835] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.229103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.229129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e 
p:0 m:0 dnr:0 00:38:00.777 [2024-06-10 14:06:15.241679] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:00.777 [2024-06-10 14:06:15.241949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:00.777 [2024-06-10 14:06:15.241975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.254565] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.254839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.254865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.267453] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.267728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.267754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.280322] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.280594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.280620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.293215] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.293482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.293508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.306092] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.306359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.306385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.318985] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.319253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.319279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.331873] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.332142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.332170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.344794] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.345058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.345084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.357668] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.357935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.357962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.370558] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.370831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.370856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.383429] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.383703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.383728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.396326] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.396592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.396617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.409197] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.409464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.409495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.422052] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.422323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.422349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.434939] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.435210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.435235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.036 [2024-06-10 14:06:15.447846] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.036 [2024-06-10 14:06:15.448113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.036 [2024-06-10 14:06:15.448140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.037 [2024-06-10 14:06:15.460735] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.037 [2024-06-10 14:06:15.461005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.037 [2024-06-10 14:06:15.461031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.037 [2024-06-10 14:06:15.473620] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.037 [2024-06-10 14:06:15.473886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.037 [2024-06-10 14:06:15.473911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.037 [2024-06-10 14:06:15.486487] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.037 [2024-06-10 14:06:15.486766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.037 [2024-06-10 14:06:15.486792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.037 [2024-06-10 14:06:15.499405] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.037 [2024-06-10 14:06:15.499679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.037 [2024-06-10 14:06:15.499704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.295 [2024-06-10 14:06:15.512342] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.295 [2024-06-10 14:06:15.512611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.295 [2024-06-10 14:06:15.512637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.295 [2024-06-10 14:06:15.525209] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.295 [2024-06-10 14:06:15.525481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.295 [2024-06-10 14:06:15.525506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.295 [2024-06-10 14:06:15.538086] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.295 [2024-06-10 14:06:15.538355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.295 [2024-06-10 14:06:15.538381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.295 [2024-06-10 14:06:15.550976] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.295 [2024-06-10 14:06:15.551247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.295 [2024-06-10 14:06:15.551273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.295 [2024-06-10 14:06:15.563863] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.295 [2024-06-10 14:06:15.564130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.295 [2024-06-10 14:06:15.564155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.295 [2024-06-10 14:06:15.576738] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.295 [2024-06-10 14:06:15.577005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.295 [2024-06-10 14:06:15.577029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.295 [2024-06-10 14:06:15.589630] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.295 [2024-06-10 14:06:15.589896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.295 [2024-06-10 14:06:15.589922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.295 [2024-06-10 14:06:15.602509] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.295 [2024-06-10 14:06:15.602787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.295 [2024-06-10 14:06:15.602813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.295 [2024-06-10 14:06:15.615387] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.296 [2024-06-10 14:06:15.615657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.296 [2024-06-10 14:06:15.615683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.296 [2024-06-10 14:06:15.628564] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.296 [2024-06-10 14:06:15.628843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.296 [2024-06-10 14:06:15.628868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.296 [2024-06-10 14:06:15.641449] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.296 [2024-06-10 14:06:15.641723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.296 [2024-06-10 14:06:15.641749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.296 [2024-06-10 14:06:15.654352] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.296 [2024-06-10 14:06:15.654624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.296 [2024-06-10 14:06:15.654649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.296 [2024-06-10 14:06:15.667214] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.296 [2024-06-10 14:06:15.667481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.296 [2024-06-10 14:06:15.667507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.296 [2024-06-10 14:06:15.680091] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.296 [2024-06-10 14:06:15.680358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.296 [2024-06-10 14:06:15.680382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.296 [2024-06-10 14:06:15.693004] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.296 [2024-06-10 14:06:15.693271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.296 [2024-06-10 14:06:15.693297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.296 [2024-06-10 14:06:15.705883] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.296 [2024-06-10 14:06:15.706151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.296 [2024-06-10 14:06:15.706176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.296 [2024-06-10 14:06:15.718764] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.296 [2024-06-10 14:06:15.719032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.296 [2024-06-10 14:06:15.719057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.296 [2024-06-10 14:06:15.731670] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.296 [2024-06-10 14:06:15.731939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.296 [2024-06-10 14:06:15.731964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.296 [2024-06-10 14:06:15.744556] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.296 [2024-06-10 14:06:15.744830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.296 [2024-06-10 14:06:15.744858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.296 [2024-06-10 14:06:15.757425] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.296 [2024-06-10 14:06:15.757703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.296 [2024-06-10 14:06:15.757728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.770283] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.770549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.770581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.783206] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.783471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.783496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.796098] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.796369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.796394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.809019] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.809287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.809312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.821916] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.822191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.822216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.834799] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.835066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.835091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.847683] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.847953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.847978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.860567] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.860846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.860876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.873458] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.873734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.873758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.886346] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.886613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.886638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.899234] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.899499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.899524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.912105] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.912375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.912400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.924970] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.925238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.925263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.937819] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.938086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.938110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.950680] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.950948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.950974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.963567] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.963839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.963864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.976426] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.976710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.976736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:15.989318] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:15.989586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:15.989612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:16.002203] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:16.002472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:16.002497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.555 [2024-06-10 14:06:16.015271] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.555 [2024-06-10 14:06:16.015541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.555 [2024-06-10 14:06:16.015565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.028138] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.028407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.028432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.041000] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.041266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.041291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.053899] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.054169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.054195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.066786] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.067053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.067077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.079651] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.079920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.079945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.092514] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.092789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.092814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.105364] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.105633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.105658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.118461] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.118735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.118760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.131324] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.131598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.131623] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.144213] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.144482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.144507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.157063] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.157331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.157356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.169946] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.170216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.170242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.182800] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.183067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.183093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.195696] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.195965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.195994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.208551] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.208823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.208849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.221417] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.221684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.221710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.234283] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.234551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.234582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.247178] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.247443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.247468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.260031] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.260298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.260323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:01.814 [2024-06-10 14:06:16.272861] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:01.814 [2024-06-10 14:06:16.273128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.814 [2024-06-10 14:06:16.273154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.072 [2024-06-10 14:06:16.285758] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.072 [2024-06-10 14:06:16.286023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.072 [2024-06-10 14:06:16.286048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.072 [2024-06-10 14:06:16.298611] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.298880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.298905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.311491] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.311771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.311796] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.324330] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.324599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.324623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.337237] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.337504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.337529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.350139] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.350409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.350435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.363024] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.363295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.363320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.375879] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.376148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.376173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.388773] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.389042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.389066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.401645] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.401912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 
14:06:16.401937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.414494] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.414767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.414792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.427369] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.427636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.427660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.440229] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.440495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.440521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.453087] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.453354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.453378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.465975] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.466241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.466267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.478838] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.479103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.479128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.491710] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.491980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 
[2024-06-10 14:06:16.492005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.504558] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.504831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.504857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.517424] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.517699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.517723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.530286] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.073 [2024-06-10 14:06:16.530552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.073 [2024-06-10 14:06:16.530582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.073 [2024-06-10 14:06:16.543131] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.331 [2024-06-10 14:06:16.543401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.331 [2024-06-10 14:06:16.543426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.331 [2024-06-10 14:06:16.556014] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.331 [2024-06-10 14:06:16.556283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.331 [2024-06-10 14:06:16.556308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.331 [2024-06-10 14:06:16.568868] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.331 [2024-06-10 14:06:16.569135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.331 [2024-06-10 14:06:16.569161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.331 [2024-06-10 14:06:16.581730] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.331 [2024-06-10 14:06:16.581999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:02.331 [2024-06-10 14:06:16.582024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.331 [2024-06-10 14:06:16.594587] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.331 [2024-06-10 14:06:16.594857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.331 [2024-06-10 14:06:16.594882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.331 [2024-06-10 14:06:16.607463] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.331 [2024-06-10 14:06:16.607738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.331 [2024-06-10 14:06:16.607763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.331 [2024-06-10 14:06:16.620318] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.331 [2024-06-10 14:06:16.620592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.620617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.633476] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.633752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.633778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.646349] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.646617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.646646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.659229] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.659498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.659523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.672100] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.672368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25396 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:38:02.332 [2024-06-10 14:06:16.672393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.684962] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.685228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.685254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.697876] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.698144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.698169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.710743] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.711012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.711037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.723595] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.723860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.723885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.736440] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.736713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.736738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.749311] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.749583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.749608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.762163] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.762433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10821 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.762459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.775018] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.775283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.775308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.787895] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.788163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.788188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.332 [2024-06-10 14:06:16.800772] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.332 [2024-06-10 14:06:16.801040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.332 [2024-06-10 14:06:16.801065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.813634] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.813902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.813927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.826489] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.826766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.826792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.839335] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.839602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.839627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.852212] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.852482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19540 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.852507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.865064] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.865333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.865357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.877895] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.878164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.878190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.890827] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.891095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.891121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.903684] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.903952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.903978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.916588] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.916857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.916882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.929454] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.929728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.929754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.942360] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.942630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6057 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.942656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.955223] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.955493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.955519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.968173] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.968420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.968445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.981123] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.981391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.981420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:16.994042] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:16.994313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:16.994339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:17.006949] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:17.007219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:17.007245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:17.019879] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:17.020148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:17.020173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:17.032777] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:17.033046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13881 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:17.033071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 [2024-06-10 14:06:17.045702] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5900) with pdu=0x2000190fdeb0 00:38:02.591 [2024-06-10 14:06:17.045973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:02.591 [2024-06-10 14:06:17.045999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:02.591 00:38:02.591 Latency(us) 00:38:02.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:02.592 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:02.592 nvme0n1 : 2.01 19735.35 77.09 0.00 0.00 6471.32 5898.24 16986.93 00:38:02.592 =================================================================================================================== 00:38:02.592 Total : 19735.35 77.09 0.00 0.00 6471.32 5898.24 16986.93 00:38:02.592 0 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:02.850 | .driver_specific 00:38:02.850 | .nvme_error 00:38:02.850 | .status_code 00:38:02.850 | .command_transient_transport_error' 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 155 > 0 )) 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1636177 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1636177 ']' 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1636177 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1636177 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1636177' 00:38:02.850 killing process with pid 1636177 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1636177 00:38:02.850 Received shutdown signal, test time was about 2.000000 seconds 00:38:02.850 00:38:02.850 Latency(us) 00:38:02.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:02.850 
=================================================================================================================== 00:38:02.850 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:02.850 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1636177 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1636976 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1636976 /var/tmp/bperf.sock 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1636976 ']' 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:03.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:03.108 14:06:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:03.108 [2024-06-10 14:06:17.568562] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:38:03.108 [2024-06-10 14:06:17.568632] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636976 ] 00:38:03.108 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:03.108 Zero copy mechanism will not be used. 
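For reference, the second error-injection pass launched in the trace above (randwrite, 131072-byte I/O, queue depth 16, 2-second run) boils down to the following launch step. This is a condensed sketch assembled from the traced digest.sh lines, not the script itself; the binary path, socket name, and parameters are copied verbatim from the log.

    # Host-side workload generator: bdevperf with its own RPC socket (bperf.sock).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # digest.sh then blocks (waitforlisten) until the process is up and
    # /var/tmp/bperf.sock accepts RPC connections before configuring it.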
00:38:03.366 EAL: No free 2048 kB hugepages reported on node 1 00:38:03.366 [2024-06-10 14:06:17.679098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.366 [2024-06-10 14:06:17.762315] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:38:04.300 14:06:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:04.300 14:06:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:38:04.300 14:06:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:04.300 14:06:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:04.300 14:06:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:04.300 14:06:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:04.300 14:06:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:04.300 14:06:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:04.300 14:06:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:04.300 14:06:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:04.866 nvme0n1 00:38:04.866 14:06:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:38:04.866 14:06:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:04.866 14:06:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:04.866 14:06:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:04.866 14:06:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:04.866 14:06:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:04.866 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:04.866 Zero copy mechanism will not be used. 00:38:04.866 Running I/O for 2 seconds... 
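Once bdevperf is listening, the traced RPC sequence configures the NVMe bdev and arms the CRC32C corruption before the timed run starts. Below is a condensed sketch using only the calls visible in the trace; rpc.py and bdevperf.py stand for the full script paths shown above, and the accel_error_inject_error calls go through rpc_cmd, i.e. presumably to the target's default RPC socket rather than bperf.sock.

    # Track NVMe error statistics and retry indefinitely inside the bdev layer.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Make sure no corruption is active while the controller is attached.
    rpc_cmd accel_error_inject_error -o crc32c -t disable

    # Attach with data digest enabled so every PDU payload is CRC32C-checked.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd crc32c operation, then drive I/O for the 2-second window.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    bdevperf.py -s /var/tmp/bperf.sock perform_tests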
00:38:04.866 [2024-06-10 14:06:19.314068] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:04.866 [2024-06-10 14:06:19.314556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:04.866 [2024-06-10 14:06:19.314600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:04.866 [2024-06-10 14:06:19.326404] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:04.866 [2024-06-10 14:06:19.326853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:04.866 [2024-06-10 14:06:19.326885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:04.866 [2024-06-10 14:06:19.335530] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:04.866 [2024-06-10 14:06:19.335978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:04.866 [2024-06-10 14:06:19.336008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.344731] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.344828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.344855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.353833] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.354263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.354291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.364634] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.365126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.365153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.374753] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.375239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.375265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.384528] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.385012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.385039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.394645] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.395075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.395101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.404732] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.405211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.405239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.413571] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.414018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.414045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.421459] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.421872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.421899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.430142] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.430561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.430595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.438436] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.438892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.438920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.453206] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.453659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.453686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.463142] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.463564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.463597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.471862] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.472343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.472370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.481018] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.481432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.481459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.489303] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.489799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.489826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.503872] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.504325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.504353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.515012] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.515435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.515463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.527195] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.527849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.527881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.538363] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.538858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.125 [2024-06-10 14:06:19.538886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.125 [2024-06-10 14:06:19.548682] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.125 [2024-06-10 14:06:19.549171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.126 [2024-06-10 14:06:19.549198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.126 [2024-06-10 14:06:19.557943] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.126 [2024-06-10 14:06:19.558378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.126 [2024-06-10 14:06:19.558405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.126 [2024-06-10 14:06:19.567392] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.126 [2024-06-10 14:06:19.567832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.126 [2024-06-10 14:06:19.567859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.126 [2024-06-10 14:06:19.575679] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.126 [2024-06-10 14:06:19.576177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.126 [2024-06-10 14:06:19.576204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.126 [2024-06-10 14:06:19.587492] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.126 [2024-06-10 14:06:19.587988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.126 
[2024-06-10 14:06:19.588015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.384 [2024-06-10 14:06:19.600703] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.384 [2024-06-10 14:06:19.601148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.384 [2024-06-10 14:06:19.601175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.384 [2024-06-10 14:06:19.609962] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.384 [2024-06-10 14:06:19.610399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.384 [2024-06-10 14:06:19.610425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.384 [2024-06-10 14:06:19.624732] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.384 [2024-06-10 14:06:19.625181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.384 [2024-06-10 14:06:19.625208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.384 [2024-06-10 14:06:19.638421] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.384 [2024-06-10 14:06:19.638899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.384 [2024-06-10 14:06:19.638927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.384 [2024-06-10 14:06:19.650966] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.384 [2024-06-10 14:06:19.651418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.384 [2024-06-10 14:06:19.651446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.384 [2024-06-10 14:06:19.665339] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.384 [2024-06-10 14:06:19.665823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.384 [2024-06-10 14:06:19.665850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.384 [2024-06-10 14:06:19.679289] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.384 [2024-06-10 14:06:19.679765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.384 [2024-06-10 14:06:19.679793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.384 [2024-06-10 14:06:19.691425] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.384 [2024-06-10 14:06:19.691882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.384 [2024-06-10 14:06:19.691909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.384 [2024-06-10 14:06:19.706889] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.384 [2024-06-10 14:06:19.707325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.384 [2024-06-10 14:06:19.707352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.384 [2024-06-10 14:06:19.717330] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.384 [2024-06-10 14:06:19.717760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.384 [2024-06-10 14:06:19.717787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.384 [2024-06-10 14:06:19.727241] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.384 [2024-06-10 14:06:19.727684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.384 [2024-06-10 14:06:19.727715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.384 [2024-06-10 14:06:19.738912] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.384 [2024-06-10 14:06:19.739500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.385 [2024-06-10 14:06:19.739527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.385 [2024-06-10 14:06:19.752477] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.385 [2024-06-10 14:06:19.752915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.385 [2024-06-10 14:06:19.752942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.385 [2024-06-10 14:06:19.766352] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.385 [2024-06-10 14:06:19.766809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.385 [2024-06-10 14:06:19.766836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.385 [2024-06-10 14:06:19.776323] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.385 [2024-06-10 14:06:19.776764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.385 [2024-06-10 14:06:19.776791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.385 [2024-06-10 14:06:19.785388] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.385 [2024-06-10 14:06:19.785829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.385 [2024-06-10 14:06:19.785855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.385 [2024-06-10 14:06:19.794342] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.385 [2024-06-10 14:06:19.794833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.385 [2024-06-10 14:06:19.794859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.385 [2024-06-10 14:06:19.803821] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.385 [2024-06-10 14:06:19.803971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.385 [2024-06-10 14:06:19.803995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.385 [2024-06-10 14:06:19.812495] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.385 [2024-06-10 14:06:19.812900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.385 [2024-06-10 14:06:19.812926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.385 [2024-06-10 14:06:19.820770] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.385 [2024-06-10 14:06:19.821163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.385 [2024-06-10 14:06:19.821189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.385 [2024-06-10 14:06:19.829713] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.385 [2024-06-10 14:06:19.830116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.385 [2024-06-10 14:06:19.830143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.385 [2024-06-10 14:06:19.838281] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.385 [2024-06-10 14:06:19.838692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.385 [2024-06-10 14:06:19.838718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.385 [2024-06-10 14:06:19.845917] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.385 [2024-06-10 14:06:19.846318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.385 [2024-06-10 14:06:19.846343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.855028] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.855457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.855484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.864000] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.864477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.864504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.877181] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.877645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.877672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.886895] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.887285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.887312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.895408] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 
[2024-06-10 14:06:19.895891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.895918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.903690] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.904085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.904112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.911779] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.912205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.912232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.920202] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.920603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.920630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.930745] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.931390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.931417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.943091] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.943636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.943663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.952824] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.953219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.953246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.960851] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.961245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.961271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.969236] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.969740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.969767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.977941] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.978349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.978381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.986676] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.987158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.987185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:19.995621] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:19.996007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:19.996034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:20.009038] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:20.009937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:20.010044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:20.019934] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:20.020365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:20.020392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:20.027965] 
tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.644 [2024-06-10 14:06:20.028348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.644 [2024-06-10 14:06:20.028376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.644 [2024-06-10 14:06:20.036215] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.645 [2024-06-10 14:06:20.036659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.645 [2024-06-10 14:06:20.036686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.645 [2024-06-10 14:06:20.044391] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.645 [2024-06-10 14:06:20.044813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.645 [2024-06-10 14:06:20.044840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.645 [2024-06-10 14:06:20.053489] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.645 [2024-06-10 14:06:20.053912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.645 [2024-06-10 14:06:20.053939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.645 [2024-06-10 14:06:20.061526] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.645 [2024-06-10 14:06:20.061946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.645 [2024-06-10 14:06:20.061975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.645 [2024-06-10 14:06:20.069761] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.645 [2024-06-10 14:06:20.070238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.645 [2024-06-10 14:06:20.070265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.645 [2024-06-10 14:06:20.076772] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.645 [2024-06-10 14:06:20.077230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.645 [2024-06-10 14:06:20.077258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
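Every completion record in this stretch carries the same status: COMMAND TRANSIENT TRANSPORT ERROR, printed as "(00/22)" (status code type 0x0, status code 0x22) with p, m and dnr all clear, so the command is allowed to be retried. As a rough aid to reading those fields, below is a small standalone C sketch that unpacks a status word the same way the log line formats it; the field layout follows the NVMe completion queue entry, the raw value is constructed by hand for illustration, and this is not SPDK's print routine.

#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit completion status word (phase tag in bit 0, then SC,
 * SCT, CRD, More, DNR), matching the "(sct/sc) ... p: m: dnr:" formatting
 * seen in the log. Layout per the NVMe base spec completion queue entry. */
struct nvme_status {
    unsigned p, sc, sct, crd, m, dnr;
};

static struct nvme_status decode_status(uint16_t status)
{
    struct nvme_status s;
    s.p   = status & 0x1;          /* phase tag            */
    s.sc  = (status >> 1) & 0xff;  /* status code          */
    s.sct = (status >> 9) & 0x7;   /* status code type     */
    s.crd = (status >> 12) & 0x3;  /* command retry delay  */
    s.m   = (status >> 14) & 0x1;  /* more                 */
    s.dnr = (status >> 15) & 0x1;  /* do not retry         */
    return s;
}

int main(void)
{
    /* SCT 0x0 (generic), SC 0x22 (Command Transient Transport Error),
     * p/m/dnr all clear -- the status printed throughout this log. */
    uint16_t raw = (0x0 << 9) | (0x22 << 1);
    struct nvme_status s = decode_status(raw);
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}

Run against the hand-built value above, this prints "(00/22) p:0 m:0 dnr:0", i.e. the same status string attached to each failed WRITE in the log.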
00:38:05.645 [2024-06-10 14:06:20.084734] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.645 [2024-06-10 14:06:20.085112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.645 [2024-06-10 14:06:20.085139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.645 [2024-06-10 14:06:20.092406] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.645 [2024-06-10 14:06:20.092806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.645 [2024-06-10 14:06:20.092833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.645 [2024-06-10 14:06:20.100731] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.645 [2024-06-10 14:06:20.101286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.645 [2024-06-10 14:06:20.101312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.645 [2024-06-10 14:06:20.108780] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.645 [2024-06-10 14:06:20.109314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.645 [2024-06-10 14:06:20.109341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.904 [2024-06-10 14:06:20.117271] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.904 [2024-06-10 14:06:20.117757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.904 [2024-06-10 14:06:20.117784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.904 [2024-06-10 14:06:20.125671] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.904 [2024-06-10 14:06:20.126085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.904 [2024-06-10 14:06:20.126112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.904 [2024-06-10 14:06:20.134022] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.904 [2024-06-10 14:06:20.134484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.904 [2024-06-10 14:06:20.134510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.904 [2024-06-10 14:06:20.141941] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.142428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.142456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.150203] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.150668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.150695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.158615] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.159056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.159082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.166635] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.167139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.167165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.174343] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.174814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.174840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.182149] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.182620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.182648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.190564] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.190963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.190990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.198706] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.199107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.199138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.206065] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.206461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.206488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.213868] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.214264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.214291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.221382] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.221778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.221806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.228790] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.229183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.229210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.236688] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.237072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.237099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.244536] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.244949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.244976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.252050] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.252444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.252471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.260036] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.260454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.260481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.267531] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.267931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.267958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.275140] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.275538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.275565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.283004] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.283423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.283450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.291188] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.291603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.291630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.299206] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.299607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 
[2024-06-10 14:06:20.299635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.307178] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.307583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.307611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.315305] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.315712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.315738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.323270] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.323676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.323714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.332342] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.332852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.332879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.341498] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.342030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.342058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.351247] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.905 [2024-06-10 14:06:20.351694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.905 [2024-06-10 14:06:20.351722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:05.905 [2024-06-10 14:06:20.360253] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.906 [2024-06-10 14:06:20.360776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.906 [2024-06-10 14:06:20.360803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:05.906 [2024-06-10 14:06:20.369513] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:05.906 [2024-06-10 14:06:20.369944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.906 [2024-06-10 14:06:20.369972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.165 [2024-06-10 14:06:20.378983] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.165 [2024-06-10 14:06:20.379394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.165 [2024-06-10 14:06:20.379421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.165 [2024-06-10 14:06:20.388109] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.165 [2024-06-10 14:06:20.388612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.165 [2024-06-10 14:06:20.388639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.165 [2024-06-10 14:06:20.397753] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.165 [2024-06-10 14:06:20.398214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.165 [2024-06-10 14:06:20.398241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.165 [2024-06-10 14:06:20.407119] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.165 [2024-06-10 14:06:20.407620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.165 [2024-06-10 14:06:20.407647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.165 [2024-06-10 14:06:20.416332] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.165 [2024-06-10 14:06:20.416855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.165 [2024-06-10 14:06:20.416887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.165 [2024-06-10 14:06:20.425919] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.165 [2024-06-10 14:06:20.426340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.165 [2024-06-10 14:06:20.426367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.165 [2024-06-10 14:06:20.433812] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.165 [2024-06-10 14:06:20.434209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.165 [2024-06-10 14:06:20.434235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.165 [2024-06-10 14:06:20.442268] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.165 [2024-06-10 14:06:20.442672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.442699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.450483] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.450883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.450910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.458530] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.458941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.458968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.466592] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.467006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.467033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.473570] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.473973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.474000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.480724] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.481264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.481292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.488971] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.489388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.489415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.497233] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.497625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.497652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.505360] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.505779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.505807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.513057] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.513493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.513520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.521181] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.521640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.521667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.529187] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.529573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.529605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.536997] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 
[2024-06-10 14:06:20.537397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.537425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.545388] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.545792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.545820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.553287] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.553840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.553871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.561461] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.561858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.561886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.569141] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.569549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.569582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.578136] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.578602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.578630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.586696] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.587096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.587122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.596247] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.596719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.596745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.605393] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.605876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.605902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.614183] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.614616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.614644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.622663] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.623109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.623136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.166 [2024-06-10 14:06:20.631550] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.166 [2024-06-10 14:06:20.631956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.166 [2024-06-10 14:06:20.631984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.426 [2024-06-10 14:06:20.640834] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.426 [2024-06-10 14:06:20.641400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.426 [2024-06-10 14:06:20.641427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.426 [2024-06-10 14:06:20.650341] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.426 [2024-06-10 14:06:20.650800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.426 [2024-06-10 14:06:20.650826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.426 [2024-06-10 14:06:20.658869] 
tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.426 [2024-06-10 14:06:20.659303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.426 [2024-06-10 14:06:20.659330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.426 [2024-06-10 14:06:20.668071] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.426 [2024-06-10 14:06:20.668618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.426 [2024-06-10 14:06:20.668645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.426 [2024-06-10 14:06:20.677352] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.426 [2024-06-10 14:06:20.677818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.426 [2024-06-10 14:06:20.677846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.426 [2024-06-10 14:06:20.685689] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.426 [2024-06-10 14:06:20.686091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.426 [2024-06-10 14:06:20.686118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.426 [2024-06-10 14:06:20.693937] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.426 [2024-06-10 14:06:20.694358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.426 [2024-06-10 14:06:20.694385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.426 [2024-06-10 14:06:20.702051] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.426 [2024-06-10 14:06:20.702469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.426 [2024-06-10 14:06:20.702496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.426 [2024-06-10 14:06:20.710119] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.426 [2024-06-10 14:06:20.710517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.426 [2024-06-10 14:06:20.710544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
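The trigger for each of these completions is the record just before it: the TCP transport reporting a data digest error on the queue pair, meaning the CRC32C digest carried after the PDU data section did not match the digest recomputed over the received data. Below is a minimal standalone sketch of what that check amounts to, using a plain bitwise CRC32C; the payload, its size and the injected bit flip are invented for illustration, and this is not SPDK's tcp.c code.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reference (bitwise) CRC32C, reflected polynomial 0x82F63B78. NVMe/TCP
 * data digests are CRC32C over the PDU data section; production code uses
 * table-driven or hardware-accelerated CRC32C rather than this loop. */
static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
{
    crc = ~crc;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return ~crc;
}

int main(void)
{
    uint8_t payload[512];                /* stand-in for a PDU data section */
    memset(payload, 0xA5, sizeof(payload));

    uint32_t sent_ddgst = crc32c(0, payload, sizeof(payload));

    /* Emulate the fault injection exercised by this test: corrupt one byte
     * of the data after the digest was computed, then re-verify. */
    payload[100] ^= 0x01;
    uint32_t recv_ddgst = crc32c(0, payload, sizeof(payload));

    if (recv_ddgst != sent_ddgst)
        printf("data digest mismatch: got 0x%08x, expected 0x%08x\n",
               recv_ddgst, sent_ddgst);
    return 0;
}

Flipping a single payload byte after the digest is computed is enough to make the recomputed value disagree, which is the condition logged here as a data digest error and surfaced to the command as the transient transport status decoded earlier, with dnr (do not retry) left clear.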
00:38:06.426 [2024-06-10 14:06:20.718568] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.426 [2024-06-10 14:06:20.719087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.426 [2024-06-10 14:06:20.719114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.426 [2024-06-10 14:06:20.727679] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.728087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.728114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.735854] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.736249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.736275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.744188] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.744584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.744611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.752450] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.752857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.752885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.760628] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.761071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.761097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.768783] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.769167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.769193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.777476] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.777894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.777925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.785491] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.785889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.785915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.793636] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.794031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.794057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.801994] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.802434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.802460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.810163] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.810562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.810595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.818353] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.818743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.818770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.826307] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.826768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.826795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.835026] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.835519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.835546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.844138] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.844642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.844669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.852678] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.853202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.853228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.862395] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.862883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.862910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.870827] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.871295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.871321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.878662] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.879052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.879079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.886792] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.887231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.887258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.427 [2024-06-10 14:06:20.894942] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.427 [2024-06-10 14:06:20.895347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.427 [2024-06-10 14:06:20.895374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.687 [2024-06-10 14:06:20.903637] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.687 [2024-06-10 14:06:20.904127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.687 [2024-06-10 14:06:20.904154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.687 [2024-06-10 14:06:20.913270] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.687 [2024-06-10 14:06:20.913810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.687 [2024-06-10 14:06:20.913838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.687 [2024-06-10 14:06:20.921834] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.687 [2024-06-10 14:06:20.922351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.687 [2024-06-10 14:06:20.922378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.687 [2024-06-10 14:06:20.931153] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.687 [2024-06-10 14:06:20.931671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.687 [2024-06-10 14:06:20.931697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.687 [2024-06-10 14:06:20.940550] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.687 [2024-06-10 14:06:20.940955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.687 [2024-06-10 14:06:20.940982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.687 [2024-06-10 14:06:20.948786] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.687 [2024-06-10 14:06:20.949270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.687 
[2024-06-10 14:06:20.949297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.687 [2024-06-10 14:06:20.958583] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:20.959097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:20.959124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:20.967312] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:20.967747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:20.967773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:20.975302] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:20.975703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:20.975730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:20.983060] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:20.983492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:20.983520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:20.991400] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:20.991868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:20.991895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:20.999761] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.000308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.000340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.008428] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.008894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.008922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.016811] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.017285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.017311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.024717] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.025123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.025151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.032643] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.033078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.033104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.040274] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.040654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.040681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.048027] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.048545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.048572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.055843] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.056210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.056237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.063218] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.063598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.063625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.071543] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.071930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.071956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.079725] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.080102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.080129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.087294] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.087672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.087698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.095368] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.095781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.095808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.102929] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.103297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.103324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.111265] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.111635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.111662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.118693] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.119077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.119103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.126813] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.127180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.127207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.134512] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.134898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.134929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.142437] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.142815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.142842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.150427] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.688 [2024-06-10 14:06:21.150802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.688 [2024-06-10 14:06:21.150829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.688 [2024-06-10 14:06:21.157859] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.948 [2024-06-10 14:06:21.158234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.948 [2024-06-10 14:06:21.158261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.948 [2024-06-10 14:06:21.165802] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.948 [2024-06-10 14:06:21.166174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.948 [2024-06-10 14:06:21.166201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.948 [2024-06-10 14:06:21.173570] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 
[2024-06-10 14:06:21.173958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.173985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.181425] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.181854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.181881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.189728] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.190110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.190136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.197450] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.197817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.197843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.204941] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.205324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.205350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.213434] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.213890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.213916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.221624] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.222030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.222056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.229424] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.229793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.229819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.237343] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.237738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.237765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.246205] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.246668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.246696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.254563] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.254943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.254970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.262511] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.262886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.262913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.270356] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.270766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.270793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.277797] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.278202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.278229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.285426] 
tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.285837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.285863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.292717] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.293088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.293115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:06.949 [2024-06-10 14:06:21.299787] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20e5b10) with pdu=0x2000190fef90 00:38:06.949 [2024-06-10 14:06:21.300000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:06.949 [2024-06-10 14:06:21.300027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:06.949 00:38:06.949 Latency(us) 00:38:06.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.949 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:06.949 nvme0n1 : 2.00 3468.42 433.55 0.00 0.00 4605.34 3080.19 14680.06 00:38:06.949 =================================================================================================================== 00:38:06.949 Total : 3468.42 433.55 0.00 0.00 4605.34 3080.19 14680.06 00:38:06.949 0 00:38:06.949 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:06.949 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:06.949 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:06.949 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:06.949 | .driver_specific 00:38:06.949 | .nvme_error 00:38:06.949 | .status_code 00:38:06.949 | .command_transient_transport_error' 00:38:07.208 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 224 > 0 )) 00:38:07.208 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1636976 00:38:07.208 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1636976 ']' 00:38:07.208 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1636976 00:38:07.208 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:38:07.208 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:07.208 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1636976 00:38:07.208 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@955 -- # process_name=reactor_1 00:38:07.208 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:38:07.208 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1636976' 00:38:07.208 killing process with pid 1636976 00:38:07.208 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1636976 00:38:07.208 Received shutdown signal, test time was about 2.000000 seconds 00:38:07.208 00:38:07.208 Latency(us) 00:38:07.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:07.208 =================================================================================================================== 00:38:07.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:07.208 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1636976 00:38:07.468 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1634162 00:38:07.468 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1634162 ']' 00:38:07.468 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1634162 00:38:07.468 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:38:07.468 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:07.468 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1634162 00:38:07.468 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:07.468 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:07.468 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1634162' 00:38:07.468 killing process with pid 1634162 00:38:07.468 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1634162 00:38:07.468 14:06:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1634162 00:38:07.727 00:38:07.727 real 0m18.061s 00:38:07.727 user 0m35.127s 00:38:07.727 sys 0m5.130s 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:07.727 ************************************ 00:38:07.727 END TEST nvmf_digest_error 00:38:07.727 ************************************ 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:07.727 rmmod nvme_tcp 00:38:07.727 rmmod nvme_fabrics 00:38:07.727 rmmod 
nvme_keyring 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1634162 ']' 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1634162 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 1634162 ']' 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 1634162 00:38:07.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1634162) - No such process 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 1634162 is not found' 00:38:07.727 Process with pid 1634162 is not found 00:38:07.727 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:07.986 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:07.986 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:07.986 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:07.986 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:07.986 14:06:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:07.986 14:06:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:07.986 14:06:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:09.890 14:06:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:09.890 00:38:09.890 real 0m47.296s 00:38:09.890 user 1m12.850s 00:38:09.890 sys 0m17.075s 00:38:09.890 14:06:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:09.890 14:06:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:09.890 ************************************ 00:38:09.890 END TEST nvmf_digest 00:38:09.890 ************************************ 00:38:09.890 14:06:24 nvmf_tcp -- nvmf/nvmf.sh@112 -- # [[ 0 -eq 1 ]] 00:38:09.890 14:06:24 nvmf_tcp -- nvmf/nvmf.sh@117 -- # [[ 0 -eq 1 ]] 00:38:09.890 14:06:24 nvmf_tcp -- nvmf/nvmf.sh@122 -- # [[ phy == phy ]] 00:38:09.890 14:06:24 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:09.890 14:06:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:09.890 14:06:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:09.890 14:06:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:10.150 ************************************ 00:38:10.150 START TEST nvmf_bdevperf 00:38:10.150 ************************************ 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:10.150 * Looking for test storage... 
00:38:10.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:38:10.150 14:06:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:20.123 14:06:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:20.123 14:06:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:38:20.123 14:06:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:20.123 14:06:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:20.123 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:20.123 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:20.123 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:20.123 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:38:20.123 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:20.123 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:38:20.123 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:38:20.123 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:38:20.123 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:38:20.123 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:38:20.123 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:38:20.123 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:20.124 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:20.124 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:20.124 Found net devices under 0000:af:00.0: cvl_0_0 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:20.124 Found net devices under 0000:af:00.1: cvl_0_1 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:20.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:20.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:38:20.124 00:38:20.124 --- 10.0.0.2 ping statistics --- 00:38:20.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.124 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:20.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:20.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:38:20.124 00:38:20.124 --- 10.0.0.1 ping statistics --- 00:38:20.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.124 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1641983 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1641983 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1641983 ']' 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:20.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:20.124 14:06:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:20.124 [2024-06-10 14:06:33.411197] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:38:20.125 [2024-06-10 14:06:33.411255] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:20.125 EAL: No free 2048 kB hugepages reported on node 1 00:38:20.125 [2024-06-10 14:06:33.530289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:20.125 [2024-06-10 14:06:33.614990] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:38:20.125 [2024-06-10 14:06:33.615038] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:20.125 [2024-06-10 14:06:33.615052] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:20.125 [2024-06-10 14:06:33.615064] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:20.125 [2024-06-10 14:06:33.615074] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:20.125 [2024-06-10 14:06:33.615190] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:38:20.125 [2024-06-10 14:06:33.615301] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:38:20.125 [2024-06-10 14:06:33.615301] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:20.125 [2024-06-10 14:06:34.376148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:20.125 Malloc0 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:20.125 [2024-06-10 14:06:34.438356] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:20.125 { 00:38:20.125 "params": { 00:38:20.125 "name": "Nvme$subsystem", 00:38:20.125 "trtype": "$TEST_TRANSPORT", 00:38:20.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:20.125 "adrfam": "ipv4", 00:38:20.125 "trsvcid": "$NVMF_PORT", 00:38:20.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:20.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:20.125 "hdgst": ${hdgst:-false}, 00:38:20.125 "ddgst": ${ddgst:-false} 00:38:20.125 }, 00:38:20.125 "method": "bdev_nvme_attach_controller" 00:38:20.125 } 00:38:20.125 EOF 00:38:20.125 )") 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:38:20.125 14:06:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:20.125 "params": { 00:38:20.125 "name": "Nvme1", 00:38:20.125 "trtype": "tcp", 00:38:20.125 "traddr": "10.0.0.2", 00:38:20.125 "adrfam": "ipv4", 00:38:20.125 "trsvcid": "4420", 00:38:20.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:20.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:20.125 "hdgst": false, 00:38:20.125 "ddgst": false 00:38:20.125 }, 00:38:20.125 "method": "bdev_nvme_attach_controller" 00:38:20.125 }' 00:38:20.125 [2024-06-10 14:06:34.495091] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:38:20.125 [2024-06-10 14:06:34.495155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642258 ] 00:38:20.125 EAL: No free 2048 kB hugepages reported on node 1 00:38:20.383 [2024-06-10 14:06:34.615912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.383 [2024-06-10 14:06:34.697583] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.641 Running I/O for 1 seconds... 
00:38:21.573 00:38:21.573 Latency(us) 00:38:21.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.573 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:21.573 Verification LBA range: start 0x0 length 0x4000 00:38:21.573 Nvme1n1 : 1.01 8151.59 31.84 0.00 0.00 15638.73 2791.83 19713.23 00:38:21.573 =================================================================================================================== 00:38:21.573 Total : 8151.59 31.84 0.00 0.00 15638.73 2791.83 19713.23 00:38:21.831 14:06:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1642530 00:38:21.831 14:06:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:38:21.831 14:06:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:38:21.831 14:06:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:38:21.831 14:06:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:38:21.831 14:06:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:38:21.831 14:06:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:21.831 14:06:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:21.831 { 00:38:21.831 "params": { 00:38:21.831 "name": "Nvme$subsystem", 00:38:21.831 "trtype": "$TEST_TRANSPORT", 00:38:21.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:21.831 "adrfam": "ipv4", 00:38:21.831 "trsvcid": "$NVMF_PORT", 00:38:21.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:21.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:21.831 "hdgst": ${hdgst:-false}, 00:38:21.831 "ddgst": ${ddgst:-false} 00:38:21.831 }, 00:38:21.831 "method": "bdev_nvme_attach_controller" 00:38:21.831 } 00:38:21.831 EOF 00:38:21.831 )") 00:38:21.831 14:06:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:38:21.831 14:06:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:38:21.831 14:06:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:38:21.831 14:06:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:21.831 "params": { 00:38:21.831 "name": "Nvme1", 00:38:21.831 "trtype": "tcp", 00:38:21.831 "traddr": "10.0.0.2", 00:38:21.831 "adrfam": "ipv4", 00:38:21.831 "trsvcid": "4420", 00:38:21.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:21.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:21.831 "hdgst": false, 00:38:21.831 "ddgst": false 00:38:21.831 }, 00:38:21.831 "method": "bdev_nvme_attach_controller" 00:38:21.831 }' 00:38:21.831 [2024-06-10 14:06:36.158761] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:38:21.831 [2024-06-10 14:06:36.158824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642530 ] 00:38:21.831 EAL: No free 2048 kB hugepages reported on node 1 00:38:21.831 [2024-06-10 14:06:36.281726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:22.089 [2024-06-10 14:06:36.359322] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:38:22.089 Running I/O for 15 seconds... 
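To make the bdevperf phase above easier to follow: the trace brings up an NVMe-oF TCP target over RPC (TCP transport, a Malloc0 namespace under nqn.2016-06.io.spdk:cnode1, listener on 10.0.0.2:4420, with the target itself running inside the cvl_0_0_ns_spdk namespace set up earlier), runs a 1-second verify workload as a sanity pass, then starts the 15-second run whose -f flag evidently lets bdevperf ride out I/O errors instead of aborting. The kill -9 of the target (pid 1641983) on the lines that follow is what produces the burst of ABORTED - SQ DELETION completions afterwards. A condensed, stand-alone sketch of the same bring-up is below; the SPDK path, addresses, NQNs and bdevperf flags are copied from the log, while the JSON wrapper around the attach-controller entry and the use of a temporary file instead of an anonymous fd are simplifying assumptions (the test builds the config with its gen_nvmf_target_json helper and passes it over /dev/fd/63).

    #!/usr/bin/env bash
    # Condensed reconstruction of the bdevperf setup traced above (host/bdevperf.sh).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # adjust to your checkout
    rpc="$SPDK/scripts/rpc.py"

    # Target side (the test drives these through rpc_cmd against the running nvmf_tgt):
    # TCP transport, a 64 MiB / 512 B-block malloc bdev, one subsystem with a namespace
    # and a listener on 10.0.0.2:4420.
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: attach to the target over NVMe/TCP from a JSON config (standard SPDK
    # config shape; written to a file here instead of the anonymous fd the test uses)
    # and run the 15-second verify workload.
    cfg='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }'
    printf '%s\n' "$cfg" > /tmp/nvme1.json
    "$SPDK/build/examples/bdevperf" --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 15 -f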
00:38:24.646 14:06:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1641983 00:38:24.646 14:06:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:24.908 [2024-06-10 14:06:39.129411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129755] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.129976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.129989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.908 [2024-06-10 14:06:39.130369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.908 [2024-06-10 14:06:39.130382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.909 [2024-06-10 14:06:39.130410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.909 [2024-06-10 14:06:39.130438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.909 [2024-06-10 14:06:39.130466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.909 [2024-06-10 14:06:39.130495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.909 [2024-06-10 14:06:39.130524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.909 [2024-06-10 14:06:39.130553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.909 [2024-06-10 14:06:39.130588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.909 [2024-06-10 14:06:39.130616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.909 [2024-06-10 14:06:39.130644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.909 [2024-06-10 14:06:39.130672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.130700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.130728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.130756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.130785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.130812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.130841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.130868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.130896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.130924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 
[2024-06-10 14:06:39.130941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.130954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.130982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.130997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.909 [2024-06-10 14:06:39.131175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.909 [2024-06-10 14:06:39.131488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.909 [2024-06-10 14:06:39.131503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39776 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.131986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.131999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 
14:06:39.132087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.910 [2024-06-10 14:06:39.132629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.910 [2024-06-10 14:06:39.132644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.911 [2024-06-10 14:06:39.132656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.132671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.911 [2024-06-10 14:06:39.132684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.132699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:24.911 [2024-06-10 14:06:39.132711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.132731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.132745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.132759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.132772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.132787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.132800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.132815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.132827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.132842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.132855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.132870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.132882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.132898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.132911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.132926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.132938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.132953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.132966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.132981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.132994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.133008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.133022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.133037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.133049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.133064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.133079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.133094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.911 [2024-06-10 14:06:39.133106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.133120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f7570 is same with the state(5) to be set 00:38:24.911 [2024-06-10 14:06:39.133136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:24.911 [2024-06-10 14:06:39.133147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:24.911 [2024-06-10 14:06:39.133158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39464 len:8 PRP1 0x0 PRP2 0x0 00:38:24.911 [2024-06-10 14:06:39.133172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.911 [2024-06-10 14:06:39.133224] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19f7570 was disconnected and freed. reset controller. 
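The dump above is the host-side fallout of the kill -9 on the target: every command still in flight on the qpair is completed with ABORTED - SQ DELETION (status 00/08), then the qpair is freed and the controller reset path begins. A small, hedged sketch for condensing such a dump when reading these logs; the log file name is a placeholder:

```bash
# Count the force-aborted commands; with -q 128 the dump is on the order of
# the queue depth. "bdevperf.log" is a placeholder file name.
grep -c 'ABORTED - SQ DELETION' bdevperf.log

# Split the aborted commands into reads vs. writes cut off mid-flight.
grep -o 'NOTICE\*: \(READ\|WRITE\) sqid:1' bdevperf.log | sort | uniq -c
```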
00:38:24.911 [2024-06-10 14:06:39.136967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.911 [2024-06-10 14:06:39.137032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.911 [2024-06-10 14:06:39.137723] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.911 [2024-06-10 14:06:39.137747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.911 [2024-06-10 14:06:39.137761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.911 [2024-06-10 14:06:39.138000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.911 [2024-06-10 14:06:39.138237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.911 [2024-06-10 14:06:39.138251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.911 [2024-06-10 14:06:39.138265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.911 [2024-06-10 14:06:39.141999] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:24.911 [2024-06-10 14:06:39.151464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.911 [2024-06-10 14:06:39.152046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.911 [2024-06-10 14:06:39.152071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.911 [2024-06-10 14:06:39.152085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.911 [2024-06-10 14:06:39.152322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.911 [2024-06-10 14:06:39.152560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.911 [2024-06-10 14:06:39.152585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.911 [2024-06-10 14:06:39.152599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.911 [2024-06-10 14:06:39.156327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:24.911 [2024-06-10 14:06:39.165564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.911 [2024-06-10 14:06:39.166128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.911 [2024-06-10 14:06:39.166156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.911 [2024-06-10 14:06:39.166170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.911 [2024-06-10 14:06:39.166407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.911 [2024-06-10 14:06:39.166662] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.911 [2024-06-10 14:06:39.166678] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.911 [2024-06-10 14:06:39.166692] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.911 [2024-06-10 14:06:39.170417] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:24.911 [2024-06-10 14:06:39.179655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.911 [2024-06-10 14:06:39.180241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.911 [2024-06-10 14:06:39.180265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.911 [2024-06-10 14:06:39.180279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.911 [2024-06-10 14:06:39.180518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.911 [2024-06-10 14:06:39.180767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.911 [2024-06-10 14:06:39.180782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.911 [2024-06-10 14:06:39.180795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.911 [2024-06-10 14:06:39.184525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:24.911 [2024-06-10 14:06:39.193780] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.911 [2024-06-10 14:06:39.194372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.911 [2024-06-10 14:06:39.194396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.911 [2024-06-10 14:06:39.194410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.911 [2024-06-10 14:06:39.194657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.911 [2024-06-10 14:06:39.194894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.911 [2024-06-10 14:06:39.194909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.911 [2024-06-10 14:06:39.194922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.911 [2024-06-10 14:06:39.198661] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:24.911 [2024-06-10 14:06:39.207903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.911 [2024-06-10 14:06:39.208503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.912 [2024-06-10 14:06:39.208555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.912 [2024-06-10 14:06:39.208602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.912 [2024-06-10 14:06:39.209070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.912 [2024-06-10 14:06:39.209313] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.912 [2024-06-10 14:06:39.209328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.912 [2024-06-10 14:06:39.209341] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.912 [2024-06-10 14:06:39.213070] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:24.912 [2024-06-10 14:06:39.222079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.912 [2024-06-10 14:06:39.222675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.912 [2024-06-10 14:06:39.222727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.912 [2024-06-10 14:06:39.222761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.912 [2024-06-10 14:06:39.223186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.912 [2024-06-10 14:06:39.223425] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.912 [2024-06-10 14:06:39.223440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.912 [2024-06-10 14:06:39.223453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.912 [2024-06-10 14:06:39.227184] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:24.912 [2024-06-10 14:06:39.236198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.912 [2024-06-10 14:06:39.236717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.912 [2024-06-10 14:06:39.236741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.912 [2024-06-10 14:06:39.236754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.912 [2024-06-10 14:06:39.236992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.912 [2024-06-10 14:06:39.237231] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.912 [2024-06-10 14:06:39.237245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.912 [2024-06-10 14:06:39.237258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.912 [2024-06-10 14:06:39.240989] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:24.912 [2024-06-10 14:06:39.250222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.912 [2024-06-10 14:06:39.250683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.912 [2024-06-10 14:06:39.250706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.912 [2024-06-10 14:06:39.250720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.912 [2024-06-10 14:06:39.250956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.912 [2024-06-10 14:06:39.251195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.912 [2024-06-10 14:06:39.251210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.912 [2024-06-10 14:06:39.251223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.912 [2024-06-10 14:06:39.254958] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:24.912 [2024-06-10 14:06:39.264424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.912 [2024-06-10 14:06:39.265023] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.912 [2024-06-10 14:06:39.265063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.912 [2024-06-10 14:06:39.265077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.912 [2024-06-10 14:06:39.265314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.912 [2024-06-10 14:06:39.265553] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.912 [2024-06-10 14:06:39.265568] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.912 [2024-06-10 14:06:39.265588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.912 [2024-06-10 14:06:39.269323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:24.912 [2024-06-10 14:06:39.278551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.912 [2024-06-10 14:06:39.279148] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.912 [2024-06-10 14:06:39.279201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.912 [2024-06-10 14:06:39.279236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.912 [2024-06-10 14:06:39.279637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.912 [2024-06-10 14:06:39.279876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.912 [2024-06-10 14:06:39.279891] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.912 [2024-06-10 14:06:39.279904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.912 [2024-06-10 14:06:39.283623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:24.912 [2024-06-10 14:06:39.292649] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.912 [2024-06-10 14:06:39.293232] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.912 [2024-06-10 14:06:39.293286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.912 [2024-06-10 14:06:39.293319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.912 [2024-06-10 14:06:39.293775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.913 [2024-06-10 14:06:39.294015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.913 [2024-06-10 14:06:39.294030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.913 [2024-06-10 14:06:39.294043] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.913 [2024-06-10 14:06:39.297770] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:24.913 [2024-06-10 14:06:39.306784] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.913 [2024-06-10 14:06:39.307298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.913 [2024-06-10 14:06:39.307350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.913 [2024-06-10 14:06:39.307393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.913 [2024-06-10 14:06:39.307832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.913 [2024-06-10 14:06:39.308071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.913 [2024-06-10 14:06:39.308086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.913 [2024-06-10 14:06:39.308099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.913 [2024-06-10 14:06:39.311825] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:24.913 [2024-06-10 14:06:39.320844] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.913 [2024-06-10 14:06:39.321424] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.913 [2024-06-10 14:06:39.321448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.913 [2024-06-10 14:06:39.321462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.913 [2024-06-10 14:06:39.321706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.913 [2024-06-10 14:06:39.321946] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.913 [2024-06-10 14:06:39.321961] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.913 [2024-06-10 14:06:39.321974] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.913 [2024-06-10 14:06:39.325702] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:24.913 [2024-06-10 14:06:39.334942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.913 [2024-06-10 14:06:39.335536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.913 [2024-06-10 14:06:39.335602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.913 [2024-06-10 14:06:39.335636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.913 [2024-06-10 14:06:39.336226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.913 [2024-06-10 14:06:39.336660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.913 [2024-06-10 14:06:39.336675] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.913 [2024-06-10 14:06:39.336688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.913 [2024-06-10 14:06:39.340411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:24.913 [2024-06-10 14:06:39.348985] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.913 [2024-06-10 14:06:39.349588] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.913 [2024-06-10 14:06:39.349641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.913 [2024-06-10 14:06:39.349673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.913 [2024-06-10 14:06:39.350262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.913 [2024-06-10 14:06:39.350698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.913 [2024-06-10 14:06:39.350718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.913 [2024-06-10 14:06:39.350731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.913 [2024-06-10 14:06:39.354455] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:24.913 [2024-06-10 14:06:39.363033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.913 [2024-06-10 14:06:39.363636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.913 [2024-06-10 14:06:39.363689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:24.913 [2024-06-10 14:06:39.363722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:24.913 [2024-06-10 14:06:39.364311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:24.913 [2024-06-10 14:06:39.364774] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.913 [2024-06-10 14:06:39.364790] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.913 [2024-06-10 14:06:39.364802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.913 [2024-06-10 14:06:39.368535] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.172 [2024-06-10 14:06:39.377112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.172 [2024-06-10 14:06:39.377676] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.172 [2024-06-10 14:06:39.377701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.172 [2024-06-10 14:06:39.377714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.172 [2024-06-10 14:06:39.377951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.172 [2024-06-10 14:06:39.378189] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.172 [2024-06-10 14:06:39.378204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.173 [2024-06-10 14:06:39.378217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.173 [2024-06-10 14:06:39.381952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.173 [2024-06-10 14:06:39.391196] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.173 [2024-06-10 14:06:39.391766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.173 [2024-06-10 14:06:39.391792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.173 [2024-06-10 14:06:39.391806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.173 [2024-06-10 14:06:39.392045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.173 [2024-06-10 14:06:39.392283] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.173 [2024-06-10 14:06:39.392298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.173 [2024-06-10 14:06:39.392310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.173 [2024-06-10 14:06:39.396042] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.173 [2024-06-10 14:06:39.405283] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.173 [2024-06-10 14:06:39.405845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.173 [2024-06-10 14:06:39.405869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.173 [2024-06-10 14:06:39.405883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.173 [2024-06-10 14:06:39.406120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.173 [2024-06-10 14:06:39.406358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.173 [2024-06-10 14:06:39.406373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.173 [2024-06-10 14:06:39.406386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.173 [2024-06-10 14:06:39.410120] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.173 [2024-06-10 14:06:39.419354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.173 [2024-06-10 14:06:39.419879] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.173 [2024-06-10 14:06:39.419902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.173 [2024-06-10 14:06:39.419915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.173 [2024-06-10 14:06:39.420152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.173 [2024-06-10 14:06:39.420390] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.173 [2024-06-10 14:06:39.420405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.173 [2024-06-10 14:06:39.420418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.173 [2024-06-10 14:06:39.424162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.173 [2024-06-10 14:06:39.433394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.173 [2024-06-10 14:06:39.433993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.173 [2024-06-10 14:06:39.434046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.173 [2024-06-10 14:06:39.434079] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.173 [2024-06-10 14:06:39.434589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.173 [2024-06-10 14:06:39.434829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.173 [2024-06-10 14:06:39.434851] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.173 [2024-06-10 14:06:39.434865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.173 [2024-06-10 14:06:39.438644] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.173 [2024-06-10 14:06:39.447439] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.173 [2024-06-10 14:06:39.448030] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.173 [2024-06-10 14:06:39.448054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.173 [2024-06-10 14:06:39.448068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.173 [2024-06-10 14:06:39.448310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.173 [2024-06-10 14:06:39.448549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.173 [2024-06-10 14:06:39.448565] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.173 [2024-06-10 14:06:39.448585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.173 [2024-06-10 14:06:39.452309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.173 [2024-06-10 14:06:39.461541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.173 [2024-06-10 14:06:39.462144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.173 [2024-06-10 14:06:39.462170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.173 [2024-06-10 14:06:39.462183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.173 [2024-06-10 14:06:39.462422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.173 [2024-06-10 14:06:39.462666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.173 [2024-06-10 14:06:39.462682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.173 [2024-06-10 14:06:39.462695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.173 [2024-06-10 14:06:39.466424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.173 [2024-06-10 14:06:39.475663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.173 [2024-06-10 14:06:39.476211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.173 [2024-06-10 14:06:39.476235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.173 [2024-06-10 14:06:39.476248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.173 [2024-06-10 14:06:39.476486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.173 [2024-06-10 14:06:39.476732] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.173 [2024-06-10 14:06:39.476748] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.173 [2024-06-10 14:06:39.476762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.173 [2024-06-10 14:06:39.480485] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.173 [2024-06-10 14:06:39.489728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.173 [2024-06-10 14:06:39.490326] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.173 [2024-06-10 14:06:39.490379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.173 [2024-06-10 14:06:39.490412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.173 [2024-06-10 14:06:39.490868] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.173 [2024-06-10 14:06:39.491108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.173 [2024-06-10 14:06:39.491123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.173 [2024-06-10 14:06:39.491140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.173 [2024-06-10 14:06:39.494871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.173 [2024-06-10 14:06:39.503921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.173 [2024-06-10 14:06:39.504436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.173 [2024-06-10 14:06:39.504460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.173 [2024-06-10 14:06:39.504474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.173 [2024-06-10 14:06:39.504719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.173 [2024-06-10 14:06:39.504958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.173 [2024-06-10 14:06:39.504973] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.173 [2024-06-10 14:06:39.504985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.173 [2024-06-10 14:06:39.508713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.173 [2024-06-10 14:06:39.517947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.173 [2024-06-10 14:06:39.518538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.173 [2024-06-10 14:06:39.518603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.173 [2024-06-10 14:06:39.518637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.173 [2024-06-10 14:06:39.519039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.173 [2024-06-10 14:06:39.519277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.173 [2024-06-10 14:06:39.519292] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.174 [2024-06-10 14:06:39.519305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.174 [2024-06-10 14:06:39.523033] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.174 [2024-06-10 14:06:39.532048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.174 [2024-06-10 14:06:39.532624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.174 [2024-06-10 14:06:39.532676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.174 [2024-06-10 14:06:39.532708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.174 [2024-06-10 14:06:39.533212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.174 [2024-06-10 14:06:39.533451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.174 [2024-06-10 14:06:39.533465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.174 [2024-06-10 14:06:39.533478] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.174 [2024-06-10 14:06:39.537219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.174 [2024-06-10 14:06:39.546240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.174 [2024-06-10 14:06:39.546777] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.174 [2024-06-10 14:06:39.546830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.174 [2024-06-10 14:06:39.546864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.174 [2024-06-10 14:06:39.547329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.174 [2024-06-10 14:06:39.547567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.174 [2024-06-10 14:06:39.547588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.174 [2024-06-10 14:06:39.547601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.174 [2024-06-10 14:06:39.551323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.174 [2024-06-10 14:06:39.560326] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.174 [2024-06-10 14:06:39.560926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.174 [2024-06-10 14:06:39.560978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.174 [2024-06-10 14:06:39.561011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.174 [2024-06-10 14:06:39.561401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.174 [2024-06-10 14:06:39.561647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.174 [2024-06-10 14:06:39.561663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.174 [2024-06-10 14:06:39.561676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.174 [2024-06-10 14:06:39.565400] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.174 [2024-06-10 14:06:39.574423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.174 [2024-06-10 14:06:39.574994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.174 [2024-06-10 14:06:39.575018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.174 [2024-06-10 14:06:39.575032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.174 [2024-06-10 14:06:39.575268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.174 [2024-06-10 14:06:39.575506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.174 [2024-06-10 14:06:39.575521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.174 [2024-06-10 14:06:39.575534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.174 [2024-06-10 14:06:39.579258] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.174 [2024-06-10 14:06:39.588499] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.174 [2024-06-10 14:06:39.589085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.174 [2024-06-10 14:06:39.589109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.174 [2024-06-10 14:06:39.589123] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.174 [2024-06-10 14:06:39.589361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.174 [2024-06-10 14:06:39.589606] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.174 [2024-06-10 14:06:39.589622] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.174 [2024-06-10 14:06:39.589635] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.174 [2024-06-10 14:06:39.593361] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.174 [2024-06-10 14:06:39.602597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.174 [2024-06-10 14:06:39.603189] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.174 [2024-06-10 14:06:39.603241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.174 [2024-06-10 14:06:39.603274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.174 [2024-06-10 14:06:39.603878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.174 [2024-06-10 14:06:39.604354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.174 [2024-06-10 14:06:39.604370] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.174 [2024-06-10 14:06:39.604383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.174 [2024-06-10 14:06:39.608108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.174 [2024-06-10 14:06:39.616680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.174 [2024-06-10 14:06:39.617238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.174 [2024-06-10 14:06:39.617261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.174 [2024-06-10 14:06:39.617275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.174 [2024-06-10 14:06:39.617510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.174 [2024-06-10 14:06:39.617755] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.174 [2024-06-10 14:06:39.617771] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.174 [2024-06-10 14:06:39.617784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.174 [2024-06-10 14:06:39.621505] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.174 [2024-06-10 14:06:39.631032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.174 [2024-06-10 14:06:39.631622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.174 [2024-06-10 14:06:39.631685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.174 [2024-06-10 14:06:39.631719] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.174 [2024-06-10 14:06:39.632245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.174 [2024-06-10 14:06:39.632484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.174 [2024-06-10 14:06:39.632498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.174 [2024-06-10 14:06:39.632518] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.174 [2024-06-10 14:06:39.636258] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.433 [2024-06-10 14:06:39.645052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.433 [2024-06-10 14:06:39.645618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.433 [2024-06-10 14:06:39.645643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.433 [2024-06-10 14:06:39.645657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.433 [2024-06-10 14:06:39.645893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.433 [2024-06-10 14:06:39.646131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.433 [2024-06-10 14:06:39.646147] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.433 [2024-06-10 14:06:39.646159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.433 [2024-06-10 14:06:39.649891] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.433 [2024-06-10 14:06:39.659116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.433 [2024-06-10 14:06:39.659712] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.433 [2024-06-10 14:06:39.659764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.434 [2024-06-10 14:06:39.659797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.434 [2024-06-10 14:06:39.660208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.434 [2024-06-10 14:06:39.660445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.434 [2024-06-10 14:06:39.660461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.434 [2024-06-10 14:06:39.660474] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.434 [2024-06-10 14:06:39.664207] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.434 [2024-06-10 14:06:39.673230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.434 [2024-06-10 14:06:39.673822] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.434 [2024-06-10 14:06:39.673846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.434 [2024-06-10 14:06:39.673860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.434 [2024-06-10 14:06:39.674096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.434 [2024-06-10 14:06:39.674333] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.434 [2024-06-10 14:06:39.674348] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.434 [2024-06-10 14:06:39.674361] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.434 [2024-06-10 14:06:39.678093] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.434 [2024-06-10 14:06:39.687327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.434 [2024-06-10 14:06:39.687932] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.434 [2024-06-10 14:06:39.687994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.434 [2024-06-10 14:06:39.688027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.434 [2024-06-10 14:06:39.688552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.434 [2024-06-10 14:06:39.688959] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.434 [2024-06-10 14:06:39.688984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.434 [2024-06-10 14:06:39.689005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.434 [2024-06-10 14:06:39.695238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.434 [2024-06-10 14:06:39.702109] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.434 [2024-06-10 14:06:39.702735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.434 [2024-06-10 14:06:39.702788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.434 [2024-06-10 14:06:39.702821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.434 [2024-06-10 14:06:39.703401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.434 [2024-06-10 14:06:39.703668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.434 [2024-06-10 14:06:39.703685] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.434 [2024-06-10 14:06:39.703700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.434 [2024-06-10 14:06:39.707749] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.434 [2024-06-10 14:06:39.716147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.434 [2024-06-10 14:06:39.716743] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.434 [2024-06-10 14:06:39.716795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.434 [2024-06-10 14:06:39.716828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.434 [2024-06-10 14:06:39.717331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.434 [2024-06-10 14:06:39.717568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.434 [2024-06-10 14:06:39.717592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.434 [2024-06-10 14:06:39.717605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.434 [2024-06-10 14:06:39.721329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.434 [2024-06-10 14:06:39.730135] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.434 [2024-06-10 14:06:39.730717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.434 [2024-06-10 14:06:39.730740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.434 [2024-06-10 14:06:39.730753] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.434 [2024-06-10 14:06:39.730989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.434 [2024-06-10 14:06:39.731230] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.434 [2024-06-10 14:06:39.731245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.434 [2024-06-10 14:06:39.731258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.434 [2024-06-10 14:06:39.734987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.434 [2024-06-10 14:06:39.744219] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.434 [2024-06-10 14:06:39.744798] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.434 [2024-06-10 14:06:39.744822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.434 [2024-06-10 14:06:39.744835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.434 [2024-06-10 14:06:39.745071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.434 [2024-06-10 14:06:39.745307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.434 [2024-06-10 14:06:39.745322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.434 [2024-06-10 14:06:39.745335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.434 [2024-06-10 14:06:39.749064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.434 [2024-06-10 14:06:39.758295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.434 [2024-06-10 14:06:39.758876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.434 [2024-06-10 14:06:39.758900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.434 [2024-06-10 14:06:39.758913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.434 [2024-06-10 14:06:39.759149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.434 [2024-06-10 14:06:39.759387] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.434 [2024-06-10 14:06:39.759403] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.434 [2024-06-10 14:06:39.759415] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.434 [2024-06-10 14:06:39.763144] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.434 [2024-06-10 14:06:39.772383] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.434 [2024-06-10 14:06:39.772974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.434 [2024-06-10 14:06:39.772998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.434 [2024-06-10 14:06:39.773012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.434 [2024-06-10 14:06:39.773249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.434 [2024-06-10 14:06:39.773488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.434 [2024-06-10 14:06:39.773504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.434 [2024-06-10 14:06:39.773517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.434 [2024-06-10 14:06:39.777251] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.434 [2024-06-10 14:06:39.786488] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.434 [2024-06-10 14:06:39.787087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.434 [2024-06-10 14:06:39.787140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.434 [2024-06-10 14:06:39.787171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.434 [2024-06-10 14:06:39.787705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.434 [2024-06-10 14:06:39.787943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.434 [2024-06-10 14:06:39.787959] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.434 [2024-06-10 14:06:39.787972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.434 [2024-06-10 14:06:39.791690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.434 [2024-06-10 14:06:39.800482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.434 [2024-06-10 14:06:39.801075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.434 [2024-06-10 14:06:39.801126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.435 [2024-06-10 14:06:39.801159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.435 [2024-06-10 14:06:39.801714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.435 [2024-06-10 14:06:39.801952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.435 [2024-06-10 14:06:39.801967] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.435 [2024-06-10 14:06:39.801981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.435 [2024-06-10 14:06:39.805709] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.435 [2024-06-10 14:06:39.814493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.435 [2024-06-10 14:06:39.815093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-06-10 14:06:39.815145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.435 [2024-06-10 14:06:39.815179] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.435 [2024-06-10 14:06:39.815782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.435 [2024-06-10 14:06:39.816054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.435 [2024-06-10 14:06:39.816069] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.435 [2024-06-10 14:06:39.816082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.435 [2024-06-10 14:06:39.819804] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.435 [2024-06-10 14:06:39.828595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.435 [2024-06-10 14:06:39.829122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-06-10 14:06:39.829174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.435 [2024-06-10 14:06:39.829215] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.435 [2024-06-10 14:06:39.829668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.435 [2024-06-10 14:06:39.830061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.435 [2024-06-10 14:06:39.830085] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.435 [2024-06-10 14:06:39.830106] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.435 [2024-06-10 14:06:39.836341] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.435 [2024-06-10 14:06:39.843517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.435 [2024-06-10 14:06:39.844142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-06-10 14:06:39.844196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.435 [2024-06-10 14:06:39.844228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.435 [2024-06-10 14:06:39.844666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.435 [2024-06-10 14:06:39.844926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.435 [2024-06-10 14:06:39.844942] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.435 [2024-06-10 14:06:39.844956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.435 [2024-06-10 14:06:39.849004] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.435 [2024-06-10 14:06:39.857655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.435 [2024-06-10 14:06:39.858140] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-06-10 14:06:39.858163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.435 [2024-06-10 14:06:39.858176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.435 [2024-06-10 14:06:39.858412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.435 [2024-06-10 14:06:39.858659] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.435 [2024-06-10 14:06:39.858674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.435 [2024-06-10 14:06:39.858688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.435 [2024-06-10 14:06:39.862412] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.435 [2024-06-10 14:06:39.871690] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.435 [2024-06-10 14:06:39.872258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-06-10 14:06:39.872282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.435 [2024-06-10 14:06:39.872295] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.435 [2024-06-10 14:06:39.872533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.435 [2024-06-10 14:06:39.872781] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.435 [2024-06-10 14:06:39.872801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.435 [2024-06-10 14:06:39.872814] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.435 [2024-06-10 14:06:39.876543] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.435 [2024-06-10 14:06:39.885806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.435 [2024-06-10 14:06:39.886380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-06-10 14:06:39.886404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.435 [2024-06-10 14:06:39.886417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.435 [2024-06-10 14:06:39.886662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.435 [2024-06-10 14:06:39.886902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.435 [2024-06-10 14:06:39.886917] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.435 [2024-06-10 14:06:39.886930] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.435 [2024-06-10 14:06:39.890675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.435 [2024-06-10 14:06:39.899937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.435 [2024-06-10 14:06:39.900531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.435 [2024-06-10 14:06:39.900596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.435 [2024-06-10 14:06:39.900630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.435 [2024-06-10 14:06:39.901126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.435 [2024-06-10 14:06:39.901367] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.435 [2024-06-10 14:06:39.901382] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.435 [2024-06-10 14:06:39.901395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.695 [2024-06-10 14:06:39.905132] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.695 [2024-06-10 14:06:39.913947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.695 [2024-06-10 14:06:39.914541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-06-10 14:06:39.914609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.695 [2024-06-10 14:06:39.914644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.695 [2024-06-10 14:06:39.914913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.695 [2024-06-10 14:06:39.915152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.695 [2024-06-10 14:06:39.915167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.695 [2024-06-10 14:06:39.915180] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.695 [2024-06-10 14:06:39.918919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.695 [2024-06-10 14:06:39.927951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.695 [2024-06-10 14:06:39.928537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-06-10 14:06:39.928561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.695 [2024-06-10 14:06:39.928583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.695 [2024-06-10 14:06:39.928820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.695 [2024-06-10 14:06:39.929058] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.695 [2024-06-10 14:06:39.929073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.695 [2024-06-10 14:06:39.929086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.695 [2024-06-10 14:06:39.932821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.695 [2024-06-10 14:06:39.942076] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.695 [2024-06-10 14:06:39.942655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-06-10 14:06:39.942708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.695 [2024-06-10 14:06:39.942742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.695 [2024-06-10 14:06:39.943253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.695 [2024-06-10 14:06:39.943490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.695 [2024-06-10 14:06:39.943505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.695 [2024-06-10 14:06:39.943519] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.695 [2024-06-10 14:06:39.947264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.695 [2024-06-10 14:06:39.956086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.695 [2024-06-10 14:06:39.956673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-06-10 14:06:39.956727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.695 [2024-06-10 14:06:39.956760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.695 [2024-06-10 14:06:39.957156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.695 [2024-06-10 14:06:39.957395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.695 [2024-06-10 14:06:39.957410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.695 [2024-06-10 14:06:39.957423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.695 [2024-06-10 14:06:39.961155] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.695 [2024-06-10 14:06:39.970188] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.695 [2024-06-10 14:06:39.970794] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-06-10 14:06:39.970848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.695 [2024-06-10 14:06:39.970880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.695 [2024-06-10 14:06:39.971477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.695 [2024-06-10 14:06:39.972073] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.695 [2024-06-10 14:06:39.972090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.695 [2024-06-10 14:06:39.972103] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.695 [2024-06-10 14:06:39.975842] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.695 [2024-06-10 14:06:39.984209] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.695 [2024-06-10 14:06:39.984658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-06-10 14:06:39.984723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.695 [2024-06-10 14:06:39.984756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.695 [2024-06-10 14:06:39.985291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.695 [2024-06-10 14:06:39.985530] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.695 [2024-06-10 14:06:39.985546] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.695 [2024-06-10 14:06:39.985558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.695 [2024-06-10 14:06:39.989303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.695 [2024-06-10 14:06:39.998333] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.695 [2024-06-10 14:06:39.998899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-06-10 14:06:39.998924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.695 [2024-06-10 14:06:39.998937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.695 [2024-06-10 14:06:39.999175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.695 [2024-06-10 14:06:39.999414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.695 [2024-06-10 14:06:39.999429] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.695 [2024-06-10 14:06:39.999442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.695 [2024-06-10 14:06:40.003174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.695 [2024-06-10 14:06:40.012424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.695 [2024-06-10 14:06:40.013127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-06-10 14:06:40.013153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.695 [2024-06-10 14:06:40.013168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.695 [2024-06-10 14:06:40.013405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.695 [2024-06-10 14:06:40.013650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.695 [2024-06-10 14:06:40.013666] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.695 [2024-06-10 14:06:40.013684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.695 [2024-06-10 14:06:40.017415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.695 [2024-06-10 14:06:40.026441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.695 [2024-06-10 14:06:40.026965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-06-10 14:06:40.026988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.695 [2024-06-10 14:06:40.027002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.695 [2024-06-10 14:06:40.027239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.695 [2024-06-10 14:06:40.027477] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.695 [2024-06-10 14:06:40.027492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.695 [2024-06-10 14:06:40.027505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.695 [2024-06-10 14:06:40.031245] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.695 [2024-06-10 14:06:40.040638] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.695 [2024-06-10 14:06:40.041102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-06-10 14:06:40.041127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.695 [2024-06-10 14:06:40.041142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.695 [2024-06-10 14:06:40.041381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.695 [2024-06-10 14:06:40.041627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.696 [2024-06-10 14:06:40.041644] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.696 [2024-06-10 14:06:40.041658] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.696 [2024-06-10 14:06:40.045387] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.696 [2024-06-10 14:06:40.054644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.696 [2024-06-10 14:06:40.055141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.696 [2024-06-10 14:06:40.055165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.696 [2024-06-10 14:06:40.055180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.696 [2024-06-10 14:06:40.055416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.696 [2024-06-10 14:06:40.055661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.696 [2024-06-10 14:06:40.055677] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.696 [2024-06-10 14:06:40.055690] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.696 [2024-06-10 14:06:40.059421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.696 [2024-06-10 14:06:40.068684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.696 [2024-06-10 14:06:40.069181] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.696 [2024-06-10 14:06:40.069205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.696 [2024-06-10 14:06:40.069219] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.696 [2024-06-10 14:06:40.069456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.696 [2024-06-10 14:06:40.069705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.696 [2024-06-10 14:06:40.069721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.696 [2024-06-10 14:06:40.069734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.696 [2024-06-10 14:06:40.073463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.696 [2024-06-10 14:06:40.082724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.696 [2024-06-10 14:06:40.083269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.696 [2024-06-10 14:06:40.083293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.696 [2024-06-10 14:06:40.083308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.696 [2024-06-10 14:06:40.083546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.696 [2024-06-10 14:06:40.083792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.696 [2024-06-10 14:06:40.083809] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.696 [2024-06-10 14:06:40.083822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.696 [2024-06-10 14:06:40.087563] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.696 [2024-06-10 14:06:40.096838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.696 [2024-06-10 14:06:40.097400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.696 [2024-06-10 14:06:40.097425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.696 [2024-06-10 14:06:40.097439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.696 [2024-06-10 14:06:40.097683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.696 [2024-06-10 14:06:40.097921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.696 [2024-06-10 14:06:40.097937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.696 [2024-06-10 14:06:40.097949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.696 [2024-06-10 14:06:40.101683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.696 [2024-06-10 14:06:40.110928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.696 [2024-06-10 14:06:40.111481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.696 [2024-06-10 14:06:40.111533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.696 [2024-06-10 14:06:40.111567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.696 [2024-06-10 14:06:40.112178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.696 [2024-06-10 14:06:40.112664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.696 [2024-06-10 14:06:40.112697] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.696 [2024-06-10 14:06:40.112719] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.696 [2024-06-10 14:06:40.118960] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.696 [2024-06-10 14:06:40.126072] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.696 [2024-06-10 14:06:40.126666] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.696 [2024-06-10 14:06:40.126692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.696 [2024-06-10 14:06:40.126706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.696 [2024-06-10 14:06:40.126964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.696 [2024-06-10 14:06:40.127223] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.696 [2024-06-10 14:06:40.127240] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.696 [2024-06-10 14:06:40.127254] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.696 [2024-06-10 14:06:40.131311] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.696 [2024-06-10 14:06:40.140289] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.696 [2024-06-10 14:06:40.140891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.696 [2024-06-10 14:06:40.140918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.696 [2024-06-10 14:06:40.140933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.696 [2024-06-10 14:06:40.141173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.696 [2024-06-10 14:06:40.141410] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.696 [2024-06-10 14:06:40.141426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.696 [2024-06-10 14:06:40.141439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.696 [2024-06-10 14:06:40.145175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.696 [2024-06-10 14:06:40.154618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.696 [2024-06-10 14:06:40.155129] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.696 [2024-06-10 14:06:40.155182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.696 [2024-06-10 14:06:40.155215] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.696 [2024-06-10 14:06:40.155720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.696 [2024-06-10 14:06:40.155960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.696 [2024-06-10 14:06:40.155975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.696 [2024-06-10 14:06:40.155988] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.696 [2024-06-10 14:06:40.159726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.956 [2024-06-10 14:06:40.168772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.956 [2024-06-10 14:06:40.169279] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.956 [2024-06-10 14:06:40.169339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.956 [2024-06-10 14:06:40.169371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.956 [2024-06-10 14:06:40.169975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.956 [2024-06-10 14:06:40.170567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.956 [2024-06-10 14:06:40.170602] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.956 [2024-06-10 14:06:40.170615] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.956 [2024-06-10 14:06:40.174338] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.956 [2024-06-10 14:06:40.182938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.956 [2024-06-10 14:06:40.183452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.956 [2024-06-10 14:06:40.183476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.956 [2024-06-10 14:06:40.183489] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.956 [2024-06-10 14:06:40.183731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.956 [2024-06-10 14:06:40.183969] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.956 [2024-06-10 14:06:40.183984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.956 [2024-06-10 14:06:40.183997] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.956 [2024-06-10 14:06:40.187734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.956 [2024-06-10 14:06:40.196990] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.956 [2024-06-10 14:06:40.197565] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.956 [2024-06-10 14:06:40.197630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.956 [2024-06-10 14:06:40.197664] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.956 [2024-06-10 14:06:40.198251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.956 [2024-06-10 14:06:40.198731] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.956 [2024-06-10 14:06:40.198747] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.956 [2024-06-10 14:06:40.198760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.956 [2024-06-10 14:06:40.202490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.956 [2024-06-10 14:06:40.211074] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.956 [2024-06-10 14:06:40.211616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.956 [2024-06-10 14:06:40.211643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.956 [2024-06-10 14:06:40.211657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.956 [2024-06-10 14:06:40.211894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.956 [2024-06-10 14:06:40.212132] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.956 [2024-06-10 14:06:40.212148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.957 [2024-06-10 14:06:40.212161] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.957 [2024-06-10 14:06:40.215896] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.957 [2024-06-10 14:06:40.225152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.957 [2024-06-10 14:06:40.225707] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.957 [2024-06-10 14:06:40.225730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.957 [2024-06-10 14:06:40.225744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.957 [2024-06-10 14:06:40.225981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.957 [2024-06-10 14:06:40.226219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.957 [2024-06-10 14:06:40.226235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.957 [2024-06-10 14:06:40.226247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.957 [2024-06-10 14:06:40.229987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.957 [2024-06-10 14:06:40.239251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.957 [2024-06-10 14:06:40.239846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.957 [2024-06-10 14:06:40.239869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.957 [2024-06-10 14:06:40.239883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.957 [2024-06-10 14:06:40.240120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.957 [2024-06-10 14:06:40.240358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.957 [2024-06-10 14:06:40.240373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.957 [2024-06-10 14:06:40.240386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.957 [2024-06-10 14:06:40.244116] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.957 [2024-06-10 14:06:40.253370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.957 [2024-06-10 14:06:40.253874] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.957 [2024-06-10 14:06:40.253899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.957 [2024-06-10 14:06:40.253913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.957 [2024-06-10 14:06:40.254150] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.957 [2024-06-10 14:06:40.254393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.957 [2024-06-10 14:06:40.254409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.957 [2024-06-10 14:06:40.254422] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.957 [2024-06-10 14:06:40.258155] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.957 [2024-06-10 14:06:40.267394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.957 [2024-06-10 14:06:40.267928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.957 [2024-06-10 14:06:40.267982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.957 [2024-06-10 14:06:40.268015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.957 [2024-06-10 14:06:40.268560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.957 [2024-06-10 14:06:40.268812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.957 [2024-06-10 14:06:40.268827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.957 [2024-06-10 14:06:40.268840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.957 [2024-06-10 14:06:40.272566] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.957 [2024-06-10 14:06:40.281594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.957 [2024-06-10 14:06:40.282163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.957 [2024-06-10 14:06:40.282215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.957 [2024-06-10 14:06:40.282248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.957 [2024-06-10 14:06:40.282679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.957 [2024-06-10 14:06:40.282917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.957 [2024-06-10 14:06:40.282932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.957 [2024-06-10 14:06:40.282945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.957 [2024-06-10 14:06:40.286680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.957 [2024-06-10 14:06:40.295708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.957 [2024-06-10 14:06:40.296312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.957 [2024-06-10 14:06:40.296365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.957 [2024-06-10 14:06:40.296398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.957 [2024-06-10 14:06:40.296914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.957 [2024-06-10 14:06:40.297154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.957 [2024-06-10 14:06:40.297169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.957 [2024-06-10 14:06:40.297182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.957 [2024-06-10 14:06:40.300915] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.957 [2024-06-10 14:06:40.309732] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.957 [2024-06-10 14:06:40.310351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.957 [2024-06-10 14:06:40.310375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.957 [2024-06-10 14:06:40.310389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.957 [2024-06-10 14:06:40.310634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.957 [2024-06-10 14:06:40.310873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.957 [2024-06-10 14:06:40.310888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.957 [2024-06-10 14:06:40.310901] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.957 [2024-06-10 14:06:40.314636] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.957 [2024-06-10 14:06:40.323880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.957 [2024-06-10 14:06:40.324455] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.957 [2024-06-10 14:06:40.324507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.957 [2024-06-10 14:06:40.324540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.957 [2024-06-10 14:06:40.325001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.957 [2024-06-10 14:06:40.325241] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.957 [2024-06-10 14:06:40.325256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.957 [2024-06-10 14:06:40.325269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.957 [2024-06-10 14:06:40.329006] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.957 [2024-06-10 14:06:40.338034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.957 [2024-06-10 14:06:40.338616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.957 [2024-06-10 14:06:40.338671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.957 [2024-06-10 14:06:40.338705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.957 [2024-06-10 14:06:40.339171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.957 [2024-06-10 14:06:40.339410] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.957 [2024-06-10 14:06:40.339425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.957 [2024-06-10 14:06:40.339438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.957 [2024-06-10 14:06:40.343172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.957 [2024-06-10 14:06:40.352196] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.957 [2024-06-10 14:06:40.352727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.957 [2024-06-10 14:06:40.352780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.957 [2024-06-10 14:06:40.352836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.957 [2024-06-10 14:06:40.353072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.957 [2024-06-10 14:06:40.353309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.957 [2024-06-10 14:06:40.353325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.957 [2024-06-10 14:06:40.353338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.957 [2024-06-10 14:06:40.357074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.958 [2024-06-10 14:06:40.366318] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.958 [2024-06-10 14:06:40.366829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.958 [2024-06-10 14:06:40.366882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.958 [2024-06-10 14:06:40.366916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.958 [2024-06-10 14:06:40.367503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.958 [2024-06-10 14:06:40.368072] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.958 [2024-06-10 14:06:40.368088] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.958 [2024-06-10 14:06:40.368102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.958 [2024-06-10 14:06:40.371852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.958 [2024-06-10 14:06:40.380435] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.958 [2024-06-10 14:06:40.381001] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.958 [2024-06-10 14:06:40.381025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.958 [2024-06-10 14:06:40.381039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.958 [2024-06-10 14:06:40.381276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.958 [2024-06-10 14:06:40.381514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.958 [2024-06-10 14:06:40.381529] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.958 [2024-06-10 14:06:40.381542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.958 [2024-06-10 14:06:40.385277] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.958 [2024-06-10 14:06:40.394528] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.958 [2024-06-10 14:06:40.395102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.958 [2024-06-10 14:06:40.395126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.958 [2024-06-10 14:06:40.395140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.958 [2024-06-10 14:06:40.395378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.958 [2024-06-10 14:06:40.395624] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.958 [2024-06-10 14:06:40.395643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.958 [2024-06-10 14:06:40.395656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.958 [2024-06-10 14:06:40.399387] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:25.958 [2024-06-10 14:06:40.408632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.958 [2024-06-10 14:06:40.409154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.958 [2024-06-10 14:06:40.409205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.958 [2024-06-10 14:06:40.409238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.958 [2024-06-10 14:06:40.409841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.958 [2024-06-10 14:06:40.410410] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.958 [2024-06-10 14:06:40.410426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.958 [2024-06-10 14:06:40.410439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:25.958 [2024-06-10 14:06:40.414170] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:25.958 [2024-06-10 14:06:40.422763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:25.958 [2024-06-10 14:06:40.423274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.958 [2024-06-10 14:06:40.423328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:25.958 [2024-06-10 14:06:40.423362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:25.958 [2024-06-10 14:06:40.423963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:25.958 [2024-06-10 14:06:40.424484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:25.958 [2024-06-10 14:06:40.424499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:25.958 [2024-06-10 14:06:40.424512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.218 [2024-06-10 14:06:40.428244] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.218 [2024-06-10 14:06:40.436844] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.218 [2024-06-10 14:06:40.437302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.218 [2024-06-10 14:06:40.437354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.218 [2024-06-10 14:06:40.437389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.218 [2024-06-10 14:06:40.437992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.218 [2024-06-10 14:06:40.438536] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.218 [2024-06-10 14:06:40.438552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.218 [2024-06-10 14:06:40.438564] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.218 [2024-06-10 14:06:40.442294] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.218 [2024-06-10 14:06:40.450885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.218 [2024-06-10 14:06:40.451482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.218 [2024-06-10 14:06:40.451505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.218 [2024-06-10 14:06:40.451519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.218 [2024-06-10 14:06:40.451762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.218 [2024-06-10 14:06:40.452002] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.218 [2024-06-10 14:06:40.452018] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.218 [2024-06-10 14:06:40.452030] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.218 [2024-06-10 14:06:40.455764] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.218 [2024-06-10 14:06:40.465014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.218 [2024-06-10 14:06:40.465599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.218 [2024-06-10 14:06:40.465653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.218 [2024-06-10 14:06:40.465687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.218 [2024-06-10 14:06:40.466189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.218 [2024-06-10 14:06:40.466566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.218 [2024-06-10 14:06:40.466597] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.218 [2024-06-10 14:06:40.466618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.218 [2024-06-10 14:06:40.472871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.218 [2024-06-10 14:06:40.479900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.218 [2024-06-10 14:06:40.480513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.218 [2024-06-10 14:06:40.480565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.218 [2024-06-10 14:06:40.480609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.218 [2024-06-10 14:06:40.481110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.218 [2024-06-10 14:06:40.481370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.218 [2024-06-10 14:06:40.481386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.218 [2024-06-10 14:06:40.481400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.218 [2024-06-10 14:06:40.485460] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.218 [2024-06-10 14:06:40.493916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.218 [2024-06-10 14:06:40.494531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.218 [2024-06-10 14:06:40.494597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.218 [2024-06-10 14:06:40.494631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.218 [2024-06-10 14:06:40.495229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.218 [2024-06-10 14:06:40.495684] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.218 [2024-06-10 14:06:40.495700] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.218 [2024-06-10 14:06:40.495713] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.218 [2024-06-10 14:06:40.499439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.218 [2024-06-10 14:06:40.508028] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.218 [2024-06-10 14:06:40.508600] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.218 [2024-06-10 14:06:40.508623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.218 [2024-06-10 14:06:40.508638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.218 [2024-06-10 14:06:40.508875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.218 [2024-06-10 14:06:40.509113] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.218 [2024-06-10 14:06:40.509128] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.218 [2024-06-10 14:06:40.509141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.218 [2024-06-10 14:06:40.512868] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.218 [2024-06-10 14:06:40.522101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.218 [2024-06-10 14:06:40.522664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.218 [2024-06-10 14:06:40.522687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.218 [2024-06-10 14:06:40.522700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.218 [2024-06-10 14:06:40.522937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.218 [2024-06-10 14:06:40.523176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.218 [2024-06-10 14:06:40.523191] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.218 [2024-06-10 14:06:40.523204] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.218 [2024-06-10 14:06:40.526972] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.218 [2024-06-10 14:06:40.536212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.218 [2024-06-10 14:06:40.536782] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.218 [2024-06-10 14:06:40.536805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.218 [2024-06-10 14:06:40.536819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.218 [2024-06-10 14:06:40.537055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.218 [2024-06-10 14:06:40.537293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.218 [2024-06-10 14:06:40.537309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.218 [2024-06-10 14:06:40.537325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.218 [2024-06-10 14:06:40.541059] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.218 [2024-06-10 14:06:40.550305] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.218 [2024-06-10 14:06:40.550890] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.218 [2024-06-10 14:06:40.550942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.218 [2024-06-10 14:06:40.550976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.218 [2024-06-10 14:06:40.551523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.218 [2024-06-10 14:06:40.551768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.218 [2024-06-10 14:06:40.551784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.218 [2024-06-10 14:06:40.551796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.218 [2024-06-10 14:06:40.555520] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.218 [2024-06-10 14:06:40.564318] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.218 [2024-06-10 14:06:40.564913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.218 [2024-06-10 14:06:40.564966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.218 [2024-06-10 14:06:40.564999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.218 [2024-06-10 14:06:40.565463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.218 [2024-06-10 14:06:40.565709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.218 [2024-06-10 14:06:40.565725] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.218 [2024-06-10 14:06:40.565738] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.218 [2024-06-10 14:06:40.569468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.219 [2024-06-10 14:06:40.578510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.219 [2024-06-10 14:06:40.579106] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.219 [2024-06-10 14:06:40.579159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.219 [2024-06-10 14:06:40.579193] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.219 [2024-06-10 14:06:40.579678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.219 [2024-06-10 14:06:40.579918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.219 [2024-06-10 14:06:40.579933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.219 [2024-06-10 14:06:40.579946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.219 [2024-06-10 14:06:40.583677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.219 [2024-06-10 14:06:40.592698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.219 [2024-06-10 14:06:40.593289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.219 [2024-06-10 14:06:40.593312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.219 [2024-06-10 14:06:40.593327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.219 [2024-06-10 14:06:40.593563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.219 [2024-06-10 14:06:40.593809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.219 [2024-06-10 14:06:40.593825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.219 [2024-06-10 14:06:40.593839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.219 [2024-06-10 14:06:40.597564] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.219 [2024-06-10 14:06:40.606804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.219 [2024-06-10 14:06:40.607333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.219 [2024-06-10 14:06:40.607387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.219 [2024-06-10 14:06:40.607422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.219 [2024-06-10 14:06:40.608026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.219 [2024-06-10 14:06:40.608629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.219 [2024-06-10 14:06:40.608645] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.219 [2024-06-10 14:06:40.608659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.219 [2024-06-10 14:06:40.612382] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.219 [2024-06-10 14:06:40.620961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.219 [2024-06-10 14:06:40.621471] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.219 [2024-06-10 14:06:40.621523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.219 [2024-06-10 14:06:40.621556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.219 [2024-06-10 14:06:40.622157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.219 [2024-06-10 14:06:40.622701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.219 [2024-06-10 14:06:40.622716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.219 [2024-06-10 14:06:40.622729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.219 [2024-06-10 14:06:40.626453] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.219 [2024-06-10 14:06:40.635107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.219 [2024-06-10 14:06:40.635716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.219 [2024-06-10 14:06:40.635772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.219 [2024-06-10 14:06:40.635806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.219 [2024-06-10 14:06:40.636396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.219 [2024-06-10 14:06:40.636727] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.219 [2024-06-10 14:06:40.636744] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.219 [2024-06-10 14:06:40.636757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.219 [2024-06-10 14:06:40.640490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.219 [2024-06-10 14:06:40.649300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.219 [2024-06-10 14:06:40.649881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.219 [2024-06-10 14:06:40.649906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.219 [2024-06-10 14:06:40.649920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.219 [2024-06-10 14:06:40.650158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.219 [2024-06-10 14:06:40.650395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.219 [2024-06-10 14:06:40.650410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.219 [2024-06-10 14:06:40.650423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.219 [2024-06-10 14:06:40.654201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.219 [2024-06-10 14:06:40.663464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.219 [2024-06-10 14:06:40.663945] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.219 [2024-06-10 14:06:40.663998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.219 [2024-06-10 14:06:40.664032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.219 [2024-06-10 14:06:40.664545] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.219 [2024-06-10 14:06:40.664791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.219 [2024-06-10 14:06:40.664807] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.219 [2024-06-10 14:06:40.664820] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.219 [2024-06-10 14:06:40.668550] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.219 [2024-06-10 14:06:40.677590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.219 [2024-06-10 14:06:40.678175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.219 [2024-06-10 14:06:40.678198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.219 [2024-06-10 14:06:40.678212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.219 [2024-06-10 14:06:40.678449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.219 [2024-06-10 14:06:40.678696] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.219 [2024-06-10 14:06:40.678712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.219 [2024-06-10 14:06:40.678725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.219 [2024-06-10 14:06:40.682454] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.478 [2024-06-10 14:06:40.691706] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.478 [2024-06-10 14:06:40.692222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.478 [2024-06-10 14:06:40.692245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.478 [2024-06-10 14:06:40.692259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.478 [2024-06-10 14:06:40.692495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.478 [2024-06-10 14:06:40.692743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.478 [2024-06-10 14:06:40.692759] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.478 [2024-06-10 14:06:40.692772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.478 [2024-06-10 14:06:40.696498] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.478 [2024-06-10 14:06:40.705749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.478 [2024-06-10 14:06:40.706334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.478 [2024-06-10 14:06:40.706358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.478 [2024-06-10 14:06:40.706372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.478 [2024-06-10 14:06:40.706616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.478 [2024-06-10 14:06:40.706856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.478 [2024-06-10 14:06:40.706872] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.478 [2024-06-10 14:06:40.706885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.478 [2024-06-10 14:06:40.710617] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.478 [2024-06-10 14:06:40.719868] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.478 [2024-06-10 14:06:40.720431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.478 [2024-06-10 14:06:40.720455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.478 [2024-06-10 14:06:40.720468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.478 [2024-06-10 14:06:40.720715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.478 [2024-06-10 14:06:40.720953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.478 [2024-06-10 14:06:40.720969] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.478 [2024-06-10 14:06:40.720982] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.478 [2024-06-10 14:06:40.724707] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.478 [2024-06-10 14:06:40.733949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.478 [2024-06-10 14:06:40.734533] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.478 [2024-06-10 14:06:40.734557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.478 [2024-06-10 14:06:40.734585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.478 [2024-06-10 14:06:40.734821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.478 [2024-06-10 14:06:40.735059] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.478 [2024-06-10 14:06:40.735074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.478 [2024-06-10 14:06:40.735087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.478 [2024-06-10 14:06:40.738825] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.478 [2024-06-10 14:06:40.748069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.748498] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.748522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.748536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.748781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.749021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.749035] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.749048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.752779] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.479 [2024-06-10 14:06:40.762240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.762813] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.762865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.762898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.763350] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.763594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.763609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.763623] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.767350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.479 [2024-06-10 14:06:40.776375] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.776876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.776900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.776914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.777151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.777392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.777408] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.777420] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.781156] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.479 [2024-06-10 14:06:40.790392] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.790994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.791047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.791080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.791685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.792129] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.792145] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.792157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.795886] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.479 [2024-06-10 14:06:40.804458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.805046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.805069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.805083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.805321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.805559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.805574] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.805595] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.809321] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.479 [2024-06-10 14:06:40.818552] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.819012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.819035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.819049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.819286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.819525] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.819540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.819552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.823282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.479 [2024-06-10 14:06:40.832738] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.833324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.833375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.833408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.833963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.834202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.834217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.834230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.837962] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.479 [2024-06-10 14:06:40.846758] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.847323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.847376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.847409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.847996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.848234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.848249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.848262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.851987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.479 [2024-06-10 14:06:40.860777] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.861365] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.861415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.861448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.862054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.862294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.862309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.862322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.866047] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.479 [2024-06-10 14:06:40.874845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.875429] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.875453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.875470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.875715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.875953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.875968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.875981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.879707] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.479 [2024-06-10 14:06:40.888945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.889532] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.889556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.889570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.889813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.890052] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.890067] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.890080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.893809] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.479 [2024-06-10 14:06:40.903041] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.903537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.903561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.903581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.903818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.904057] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.904073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.904086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.907813] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.479 [2024-06-10 14:06:40.917065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.917613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.917637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.917651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.917887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.918126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.918146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.918159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.921893] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.479 [2024-06-10 14:06:40.931129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.931725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.931777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.931810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.932398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.932688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.932704] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.932717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.479 [2024-06-10 14:06:40.936447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.479 [2024-06-10 14:06:40.945251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.479 [2024-06-10 14:06:40.945825] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.479 [2024-06-10 14:06:40.945877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.479 [2024-06-10 14:06:40.945910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.479 [2024-06-10 14:06:40.946499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.479 [2024-06-10 14:06:40.946765] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.479 [2024-06-10 14:06:40.946780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.479 [2024-06-10 14:06:40.946793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.739 [2024-06-10 14:06:40.950521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.739 [2024-06-10 14:06:40.959314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.739 [2024-06-10 14:06:40.959833] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-06-10 14:06:40.959857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.739 [2024-06-10 14:06:40.959871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.739 [2024-06-10 14:06:40.960107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.739 [2024-06-10 14:06:40.960345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.739 [2024-06-10 14:06:40.960360] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.739 [2024-06-10 14:06:40.960373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.739 [2024-06-10 14:06:40.964105] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.739 [2024-06-10 14:06:40.973384] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.739 [2024-06-10 14:06:40.973964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-06-10 14:06:40.974014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.739 [2024-06-10 14:06:40.974047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.739 [2024-06-10 14:06:40.974605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.739 [2024-06-10 14:06:40.974843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.739 [2024-06-10 14:06:40.974859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.739 [2024-06-10 14:06:40.974871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.739 [2024-06-10 14:06:40.978603] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.739 [2024-06-10 14:06:40.987396] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.739 [2024-06-10 14:06:40.987920] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.739 [2024-06-10 14:06:40.987943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.739 [2024-06-10 14:06:40.987956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.739 [2024-06-10 14:06:40.988193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.739 [2024-06-10 14:06:40.988431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.739 [2024-06-10 14:06:40.988445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.739 [2024-06-10 14:06:40.988459] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.739 [2024-06-10 14:06:40.992190] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.739 [2024-06-10 14:06:41.001425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.740 [2024-06-10 14:06:41.002013] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-06-10 14:06:41.002036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.740 [2024-06-10 14:06:41.002050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.740 [2024-06-10 14:06:41.002286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.740 [2024-06-10 14:06:41.002524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.740 [2024-06-10 14:06:41.002539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.740 [2024-06-10 14:06:41.002552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.740 [2024-06-10 14:06:41.006283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.740 [2024-06-10 14:06:41.015515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.740 [2024-06-10 14:06:41.016110] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-06-10 14:06:41.016162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.740 [2024-06-10 14:06:41.016195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.740 [2024-06-10 14:06:41.016692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.740 [2024-06-10 14:06:41.016932] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.740 [2024-06-10 14:06:41.016947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.740 [2024-06-10 14:06:41.016960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.740 [2024-06-10 14:06:41.020686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.740 [2024-06-10 14:06:41.029708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.740 [2024-06-10 14:06:41.030305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-06-10 14:06:41.030358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.740 [2024-06-10 14:06:41.030391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.740 [2024-06-10 14:06:41.030767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.740 [2024-06-10 14:06:41.031007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.740 [2024-06-10 14:06:41.031022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.740 [2024-06-10 14:06:41.031035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.740 [2024-06-10 14:06:41.034768] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.740 [2024-06-10 14:06:41.043826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.740 [2024-06-10 14:06:41.044291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-06-10 14:06:41.044315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.740 [2024-06-10 14:06:41.044330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.740 [2024-06-10 14:06:41.044567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.740 [2024-06-10 14:06:41.044813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.740 [2024-06-10 14:06:41.044828] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.740 [2024-06-10 14:06:41.044841] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.740 [2024-06-10 14:06:41.048563] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.740 [2024-06-10 14:06:41.058013] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.740 [2024-06-10 14:06:41.058564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-06-10 14:06:41.058593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.740 [2024-06-10 14:06:41.058608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.740 [2024-06-10 14:06:41.058844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.740 [2024-06-10 14:06:41.059082] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.740 [2024-06-10 14:06:41.059097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.740 [2024-06-10 14:06:41.059116] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.740 [2024-06-10 14:06:41.062848] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.740 [2024-06-10 14:06:41.072093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.740 [2024-06-10 14:06:41.072687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-06-10 14:06:41.072741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.740 [2024-06-10 14:06:41.072775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.740 [2024-06-10 14:06:41.073363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.740 [2024-06-10 14:06:41.073976] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.740 [2024-06-10 14:06:41.073992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.740 [2024-06-10 14:06:41.074005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.740 [2024-06-10 14:06:41.077736] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.740 [2024-06-10 14:06:41.086083] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.740 [2024-06-10 14:06:41.086672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-06-10 14:06:41.086725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.740 [2024-06-10 14:06:41.086758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.740 [2024-06-10 14:06:41.087144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.740 [2024-06-10 14:06:41.087386] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.740 [2024-06-10 14:06:41.087403] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.740 [2024-06-10 14:06:41.087415] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.740 [2024-06-10 14:06:41.091150] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.740 [2024-06-10 14:06:41.100162] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.740 [2024-06-10 14:06:41.100756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-06-10 14:06:41.100809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.740 [2024-06-10 14:06:41.100842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.740 [2024-06-10 14:06:41.101431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.740 [2024-06-10 14:06:41.101774] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.740 [2024-06-10 14:06:41.101791] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.740 [2024-06-10 14:06:41.101804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.740 [2024-06-10 14:06:41.105527] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.740 [2024-06-10 14:06:41.114320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.740 [2024-06-10 14:06:41.114916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-06-10 14:06:41.114976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.740 [2024-06-10 14:06:41.115009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.740 [2024-06-10 14:06:41.115610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.740 [2024-06-10 14:06:41.116015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.740 [2024-06-10 14:06:41.116030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.740 [2024-06-10 14:06:41.116042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.740 [2024-06-10 14:06:41.119769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.740 [2024-06-10 14:06:41.128339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.740 [2024-06-10 14:06:41.128862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-06-10 14:06:41.128885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.740 [2024-06-10 14:06:41.128899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.740 [2024-06-10 14:06:41.129136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.740 [2024-06-10 14:06:41.129375] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.740 [2024-06-10 14:06:41.129390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.740 [2024-06-10 14:06:41.129403] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.740 [2024-06-10 14:06:41.133133] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.740 [2024-06-10 14:06:41.142374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.740 [2024-06-10 14:06:41.142972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.740 [2024-06-10 14:06:41.143026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.740 [2024-06-10 14:06:41.143059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.741 [2024-06-10 14:06:41.143458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.741 [2024-06-10 14:06:41.143704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.741 [2024-06-10 14:06:41.143719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.741 [2024-06-10 14:06:41.143732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.741 [2024-06-10 14:06:41.147457] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.741 [2024-06-10 14:06:41.156478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.741 [2024-06-10 14:06:41.157064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-06-10 14:06:41.157088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.741 [2024-06-10 14:06:41.157102] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.741 [2024-06-10 14:06:41.157340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.741 [2024-06-10 14:06:41.157588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.741 [2024-06-10 14:06:41.157604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.741 [2024-06-10 14:06:41.157616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.741 [2024-06-10 14:06:41.161341] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.741 [2024-06-10 14:06:41.170588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.741 [2024-06-10 14:06:41.171075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-06-10 14:06:41.171122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.741 [2024-06-10 14:06:41.171155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.741 [2024-06-10 14:06:41.171748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.741 [2024-06-10 14:06:41.171987] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.741 [2024-06-10 14:06:41.172002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.741 [2024-06-10 14:06:41.172015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.741 [2024-06-10 14:06:41.175738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:26.741 [2024-06-10 14:06:41.184680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.741 [2024-06-10 14:06:41.185257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-06-10 14:06:41.185310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.741 [2024-06-10 14:06:41.185342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.741 [2024-06-10 14:06:41.185854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.741 [2024-06-10 14:06:41.186092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.741 [2024-06-10 14:06:41.186107] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.741 [2024-06-10 14:06:41.186120] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.741 [2024-06-10 14:06:41.189852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:26.741 [2024-06-10 14:06:41.198882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:26.741 [2024-06-10 14:06:41.199444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.741 [2024-06-10 14:06:41.199468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:26.741 [2024-06-10 14:06:41.199481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:26.741 [2024-06-10 14:06:41.199723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:26.741 [2024-06-10 14:06:41.199961] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:26.741 [2024-06-10 14:06:41.199976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:26.741 [2024-06-10 14:06:41.199989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:26.741 [2024-06-10 14:06:41.203723] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.000 [2024-06-10 14:06:41.212964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.000 [2024-06-10 14:06:41.213545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-06-10 14:06:41.213568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.000 [2024-06-10 14:06:41.213588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.000 [2024-06-10 14:06:41.213825] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.000 [2024-06-10 14:06:41.214063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.000 [2024-06-10 14:06:41.214078] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.000 [2024-06-10 14:06:41.214090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.000 [2024-06-10 14:06:41.217821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.000 [2024-06-10 14:06:41.227048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.000 [2024-06-10 14:06:41.227627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-06-10 14:06:41.227651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.000 [2024-06-10 14:06:41.227665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.000 [2024-06-10 14:06:41.227908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.000 [2024-06-10 14:06:41.228147] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.000 [2024-06-10 14:06:41.228162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.000 [2024-06-10 14:06:41.228175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.000 [2024-06-10 14:06:41.231905] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.000 [2024-06-10 14:06:41.241146] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.000 [2024-06-10 14:06:41.241657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-06-10 14:06:41.241681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.000 [2024-06-10 14:06:41.241695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.000 [2024-06-10 14:06:41.241934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.000 [2024-06-10 14:06:41.242172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.000 [2024-06-10 14:06:41.242187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.000 [2024-06-10 14:06:41.242200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.000 [2024-06-10 14:06:41.245935] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.000 [2024-06-10 14:06:41.255177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.000 [2024-06-10 14:06:41.255761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-06-10 14:06:41.255785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.000 [2024-06-10 14:06:41.255803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.000 [2024-06-10 14:06:41.256040] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.000 [2024-06-10 14:06:41.256277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.000 [2024-06-10 14:06:41.256292] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.000 [2024-06-10 14:06:41.256305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.000 [2024-06-10 14:06:41.260041] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.000 [2024-06-10 14:06:41.269277] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.000 [2024-06-10 14:06:41.269831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-06-10 14:06:41.269855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.000 [2024-06-10 14:06:41.269869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.000 [2024-06-10 14:06:41.270106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.000 [2024-06-10 14:06:41.270344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.000 [2024-06-10 14:06:41.270359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.000 [2024-06-10 14:06:41.270372] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.000 [2024-06-10 14:06:41.274107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.000 [2024-06-10 14:06:41.283338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.000 [2024-06-10 14:06:41.283913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-06-10 14:06:41.283936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.000 [2024-06-10 14:06:41.283950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.000 [2024-06-10 14:06:41.284187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.000 [2024-06-10 14:06:41.284425] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.000 [2024-06-10 14:06:41.284440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.000 [2024-06-10 14:06:41.284453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.000 [2024-06-10 14:06:41.288186] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.000 [2024-06-10 14:06:41.297426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.000 [2024-06-10 14:06:41.298020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-06-10 14:06:41.298044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.000 [2024-06-10 14:06:41.298057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.000 [2024-06-10 14:06:41.298295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.000 [2024-06-10 14:06:41.298533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.000 [2024-06-10 14:06:41.298552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.000 [2024-06-10 14:06:41.298566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.000 [2024-06-10 14:06:41.302294] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.000 [2024-06-10 14:06:41.311531] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.000 [2024-06-10 14:06:41.312117] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-06-10 14:06:41.312140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.000 [2024-06-10 14:06:41.312155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.000 [2024-06-10 14:06:41.312392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.000 [2024-06-10 14:06:41.312635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.000 [2024-06-10 14:06:41.312651] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.000 [2024-06-10 14:06:41.312663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.000 [2024-06-10 14:06:41.316390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.000 [2024-06-10 14:06:41.325637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.000 [2024-06-10 14:06:41.326230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-06-10 14:06:41.326282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.000 [2024-06-10 14:06:41.326315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.000 [2024-06-10 14:06:41.326922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.000 [2024-06-10 14:06:41.327443] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.000 [2024-06-10 14:06:41.327458] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.000 [2024-06-10 14:06:41.327471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.000 [2024-06-10 14:06:41.331202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.000 [2024-06-10 14:06:41.339791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.000 [2024-06-10 14:06:41.340359] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-06-10 14:06:41.340382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.000 [2024-06-10 14:06:41.340397] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.000 [2024-06-10 14:06:41.340639] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.001 [2024-06-10 14:06:41.340877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.001 [2024-06-10 14:06:41.340892] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.001 [2024-06-10 14:06:41.340905] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.001 [2024-06-10 14:06:41.344638] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.001 [2024-06-10 14:06:41.353879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.001 [2024-06-10 14:06:41.354477] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-06-10 14:06:41.354530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.001 [2024-06-10 14:06:41.354563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.001 [2024-06-10 14:06:41.355167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.001 [2024-06-10 14:06:41.355678] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.001 [2024-06-10 14:06:41.355703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.001 [2024-06-10 14:06:41.355724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.001 [2024-06-10 14:06:41.361956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.001 [2024-06-10 14:06:41.368969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.001 [2024-06-10 14:06:41.369581] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-06-10 14:06:41.369640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.001 [2024-06-10 14:06:41.369674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.001 [2024-06-10 14:06:41.370257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.001 [2024-06-10 14:06:41.370516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.001 [2024-06-10 14:06:41.370532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.001 [2024-06-10 14:06:41.370546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.001 [2024-06-10 14:06:41.374610] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.001 [2024-06-10 14:06:41.383124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.001 [2024-06-10 14:06:41.383719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-06-10 14:06:41.383772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.001 [2024-06-10 14:06:41.383806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.001 [2024-06-10 14:06:41.384321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.001 [2024-06-10 14:06:41.384559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.001 [2024-06-10 14:06:41.384580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.001 [2024-06-10 14:06:41.384593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.001 [2024-06-10 14:06:41.388323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.001 [2024-06-10 14:06:41.397136] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.001 [2024-06-10 14:06:41.397716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-06-10 14:06:41.397769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.001 [2024-06-10 14:06:41.397802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.001 [2024-06-10 14:06:41.398399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.001 [2024-06-10 14:06:41.398749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.001 [2024-06-10 14:06:41.398765] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.001 [2024-06-10 14:06:41.398779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.001 [2024-06-10 14:06:41.402504] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.001 [2024-06-10 14:06:41.411294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.001 [2024-06-10 14:06:41.411879] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-06-10 14:06:41.411903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.001 [2024-06-10 14:06:41.411916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.001 [2024-06-10 14:06:41.412153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.001 [2024-06-10 14:06:41.412390] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.001 [2024-06-10 14:06:41.412405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.001 [2024-06-10 14:06:41.412418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.001 [2024-06-10 14:06:41.416152] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.001 [2024-06-10 14:06:41.425386] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.001 [2024-06-10 14:06:41.425978] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-06-10 14:06:41.426001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.001 [2024-06-10 14:06:41.426015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.001 [2024-06-10 14:06:41.426252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.001 [2024-06-10 14:06:41.426490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.001 [2024-06-10 14:06:41.426505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.001 [2024-06-10 14:06:41.426519] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.001 [2024-06-10 14:06:41.430249] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.001 [2024-06-10 14:06:41.439482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.001 [2024-06-10 14:06:41.440092] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-06-10 14:06:41.440149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.001 [2024-06-10 14:06:41.440184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.001 [2024-06-10 14:06:41.440653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.001 [2024-06-10 14:06:41.440891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.001 [2024-06-10 14:06:41.440906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.001 [2024-06-10 14:06:41.440923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.001 [2024-06-10 14:06:41.444648] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.001 [2024-06-10 14:06:41.453659] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.001 [2024-06-10 14:06:41.454239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-06-10 14:06:41.454263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.001 [2024-06-10 14:06:41.454277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.001 [2024-06-10 14:06:41.454513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.001 [2024-06-10 14:06:41.454758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.001 [2024-06-10 14:06:41.454773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.001 [2024-06-10 14:06:41.454786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.001 [2024-06-10 14:06:41.458511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.001 [2024-06-10 14:06:41.467749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.001 [2024-06-10 14:06:41.468248] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-06-10 14:06:41.468271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.001 [2024-06-10 14:06:41.468285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.001 [2024-06-10 14:06:41.468521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.001 [2024-06-10 14:06:41.468767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.001 [2024-06-10 14:06:41.468783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.001 [2024-06-10 14:06:41.468796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.260 [2024-06-10 14:06:41.472529] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.260 [2024-06-10 14:06:41.481760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.260 [2024-06-10 14:06:41.482345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.260 [2024-06-10 14:06:41.482369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.260 [2024-06-10 14:06:41.482382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.260 [2024-06-10 14:06:41.482626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.260 [2024-06-10 14:06:41.482864] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.260 [2024-06-10 14:06:41.482879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.260 [2024-06-10 14:06:41.482892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.260 [2024-06-10 14:06:41.486653] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.260 [2024-06-10 14:06:41.495891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.260 [2024-06-10 14:06:41.496429] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.260 [2024-06-10 14:06:41.496480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.260 [2024-06-10 14:06:41.496514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.260 [2024-06-10 14:06:41.497051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.260 [2024-06-10 14:06:41.497290] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.260 [2024-06-10 14:06:41.497305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.260 [2024-06-10 14:06:41.497318] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.260 [2024-06-10 14:06:41.501049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.261 [2024-06-10 14:06:41.510071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.261 [2024-06-10 14:06:41.510564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.261 [2024-06-10 14:06:41.510592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.261 [2024-06-10 14:06:41.510606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.261 [2024-06-10 14:06:41.510844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.261 [2024-06-10 14:06:41.511082] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.261 [2024-06-10 14:06:41.511097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.261 [2024-06-10 14:06:41.511110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.261 [2024-06-10 14:06:41.514837] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.261 [2024-06-10 14:06:41.524067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.261 [2024-06-10 14:06:41.524654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.261 [2024-06-10 14:06:41.524707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.261 [2024-06-10 14:06:41.524741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.261 [2024-06-10 14:06:41.525151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.261 [2024-06-10 14:06:41.525389] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.261 [2024-06-10 14:06:41.525404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.261 [2024-06-10 14:06:41.525417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.261 [2024-06-10 14:06:41.529147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.261 [2024-06-10 14:06:41.538172] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.261 [2024-06-10 14:06:41.538725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.261 [2024-06-10 14:06:41.538749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.261 [2024-06-10 14:06:41.538762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.261 [2024-06-10 14:06:41.539004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.261 [2024-06-10 14:06:41.539243] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.261 [2024-06-10 14:06:41.539258] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.261 [2024-06-10 14:06:41.539271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.261 [2024-06-10 14:06:41.543006] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.261 [2024-06-10 14:06:41.552243] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.261 [2024-06-10 14:06:41.552732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.261 [2024-06-10 14:06:41.552756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.261 [2024-06-10 14:06:41.552771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.261 [2024-06-10 14:06:41.553008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.261 [2024-06-10 14:06:41.553247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.261 [2024-06-10 14:06:41.553262] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.261 [2024-06-10 14:06:41.553275] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.261 [2024-06-10 14:06:41.557008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.261 [2024-06-10 14:06:41.566244] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.261 [2024-06-10 14:06:41.566775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.261 [2024-06-10 14:06:41.566799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.261 [2024-06-10 14:06:41.566812] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.261 [2024-06-10 14:06:41.567049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.261 [2024-06-10 14:06:41.567287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.261 [2024-06-10 14:06:41.567302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.261 [2024-06-10 14:06:41.567316] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.261 [2024-06-10 14:06:41.571054] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.261 [2024-06-10 14:06:41.580306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.261 [2024-06-10 14:06:41.580866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.261 [2024-06-10 14:06:41.580890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.261 [2024-06-10 14:06:41.580903] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.261 [2024-06-10 14:06:41.581139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.261 [2024-06-10 14:06:41.581378] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.261 [2024-06-10 14:06:41.581393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.261 [2024-06-10 14:06:41.581410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.261 [2024-06-10 14:06:41.585146] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.261 [2024-06-10 14:06:41.594394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.261 [2024-06-10 14:06:41.594979] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.261 [2024-06-10 14:06:41.595003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.261 [2024-06-10 14:06:41.595017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.261 [2024-06-10 14:06:41.595254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.261 [2024-06-10 14:06:41.595492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.261 [2024-06-10 14:06:41.595507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.261 [2024-06-10 14:06:41.595520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.261 [2024-06-10 14:06:41.599255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.261 [2024-06-10 14:06:41.608497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.261 [2024-06-10 14:06:41.609012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.261 [2024-06-10 14:06:41.609035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.261 [2024-06-10 14:06:41.609049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.261 [2024-06-10 14:06:41.609286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.261 [2024-06-10 14:06:41.609524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.261 [2024-06-10 14:06:41.609539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.261 [2024-06-10 14:06:41.609552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.261 [2024-06-10 14:06:41.613300] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.261 [2024-06-10 14:06:41.622550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.261 [2024-06-10 14:06:41.623096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.261 [2024-06-10 14:06:41.623120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.261 [2024-06-10 14:06:41.623134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.261 [2024-06-10 14:06:41.623372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.261 [2024-06-10 14:06:41.623616] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.261 [2024-06-10 14:06:41.623632] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.261 [2024-06-10 14:06:41.623645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.261 [2024-06-10 14:06:41.627657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.261 [2024-06-10 14:06:41.636684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.261 [2024-06-10 14:06:41.637224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.262 [2024-06-10 14:06:41.637253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.262 [2024-06-10 14:06:41.637267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.262 [2024-06-10 14:06:41.637504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.262 [2024-06-10 14:06:41.637752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.262 [2024-06-10 14:06:41.637768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.262 [2024-06-10 14:06:41.637781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.262 [2024-06-10 14:06:41.641514] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.262 [2024-06-10 14:06:41.650764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.262 [2024-06-10 14:06:41.651276] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.262 [2024-06-10 14:06:41.651329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.262 [2024-06-10 14:06:41.651363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.262 [2024-06-10 14:06:41.651966] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.262 [2024-06-10 14:06:41.652434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.262 [2024-06-10 14:06:41.652449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.262 [2024-06-10 14:06:41.652462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.262 [2024-06-10 14:06:41.656192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.262 [2024-06-10 14:06:41.664772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.262 [2024-06-10 14:06:41.665284] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.262 [2024-06-10 14:06:41.665308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.262 [2024-06-10 14:06:41.665321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.262 [2024-06-10 14:06:41.665558] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.262 [2024-06-10 14:06:41.665804] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.262 [2024-06-10 14:06:41.665820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.262 [2024-06-10 14:06:41.665832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.262 [2024-06-10 14:06:41.669557] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.262 [2024-06-10 14:06:41.678817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.262 [2024-06-10 14:06:41.679264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.262 [2024-06-10 14:06:41.679288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.262 [2024-06-10 14:06:41.679301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.262 [2024-06-10 14:06:41.679538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.262 [2024-06-10 14:06:41.679785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.262 [2024-06-10 14:06:41.679802] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.262 [2024-06-10 14:06:41.679815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.262 [2024-06-10 14:06:41.683544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.262 [2024-06-10 14:06:41.693020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.262 [2024-06-10 14:06:41.693516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.262 [2024-06-10 14:06:41.693565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.262 [2024-06-10 14:06:41.693614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.262 [2024-06-10 14:06:41.694174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.262 [2024-06-10 14:06:41.694414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.262 [2024-06-10 14:06:41.694429] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.262 [2024-06-10 14:06:41.694442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.262 [2024-06-10 14:06:41.698175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.262 [2024-06-10 14:06:41.707198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.262 [2024-06-10 14:06:41.707701] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.262 [2024-06-10 14:06:41.707725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.262 [2024-06-10 14:06:41.707738] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.262 [2024-06-10 14:06:41.707975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.262 [2024-06-10 14:06:41.708214] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.262 [2024-06-10 14:06:41.708229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.262 [2024-06-10 14:06:41.708242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.262 [2024-06-10 14:06:41.711977] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.262 [2024-06-10 14:06:41.721218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.262 [2024-06-10 14:06:41.721807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.262 [2024-06-10 14:06:41.721830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.262 [2024-06-10 14:06:41.721844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.262 [2024-06-10 14:06:41.722082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.262 [2024-06-10 14:06:41.722320] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.262 [2024-06-10 14:06:41.722335] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.262 [2024-06-10 14:06:41.722348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.262 [2024-06-10 14:06:41.726081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.522 [2024-06-10 14:06:41.735326] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.522 [2024-06-10 14:06:41.735779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.522 [2024-06-10 14:06:41.735802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.522 [2024-06-10 14:06:41.735816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.522 [2024-06-10 14:06:41.736053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.522 [2024-06-10 14:06:41.736291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.522 [2024-06-10 14:06:41.736306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.522 [2024-06-10 14:06:41.736319] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.522 [2024-06-10 14:06:41.740058] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.522 [2024-06-10 14:06:41.749525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.522 [2024-06-10 14:06:41.750085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.522 [2024-06-10 14:06:41.750138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.522 [2024-06-10 14:06:41.750171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.522 [2024-06-10 14:06:41.750672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.522 [2024-06-10 14:06:41.750911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.522 [2024-06-10 14:06:41.750926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.522 [2024-06-10 14:06:41.750939] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.522 [2024-06-10 14:06:41.754669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.522 [2024-06-10 14:06:41.763693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.522 [2024-06-10 14:06:41.764292] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.522 [2024-06-10 14:06:41.764315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.522 [2024-06-10 14:06:41.764328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.522 [2024-06-10 14:06:41.764565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.522 [2024-06-10 14:06:41.764810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.522 [2024-06-10 14:06:41.764825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.522 [2024-06-10 14:06:41.764838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.522 [2024-06-10 14:06:41.768569] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.522 [2024-06-10 14:06:41.777823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.522 [2024-06-10 14:06:41.778317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.522 [2024-06-10 14:06:41.778341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.522 [2024-06-10 14:06:41.778361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.522 [2024-06-10 14:06:41.778604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.522 [2024-06-10 14:06:41.778843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.522 [2024-06-10 14:06:41.778859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.522 [2024-06-10 14:06:41.778871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.522 [2024-06-10 14:06:41.782602] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.522 [2024-06-10 14:06:41.791855] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.522 [2024-06-10 14:06:41.792348] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.522 [2024-06-10 14:06:41.792372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.522 [2024-06-10 14:06:41.792386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.522 [2024-06-10 14:06:41.792630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.522 [2024-06-10 14:06:41.792868] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.522 [2024-06-10 14:06:41.792884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.522 [2024-06-10 14:06:41.792897] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.522 [2024-06-10 14:06:41.796628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.522 [2024-06-10 14:06:41.805873] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.522 [2024-06-10 14:06:41.806442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.522 [2024-06-10 14:06:41.806494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.522 [2024-06-10 14:06:41.806528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.522 [2024-06-10 14:06:41.807129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.522 [2024-06-10 14:06:41.807627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.522 [2024-06-10 14:06:41.807643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.522 [2024-06-10 14:06:41.807656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.522 [2024-06-10 14:06:41.811379] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.522 [2024-06-10 14:06:41.819969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.522 [2024-06-10 14:06:41.820465] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.522 [2024-06-10 14:06:41.820488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.522 [2024-06-10 14:06:41.820501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.522 [2024-06-10 14:06:41.820744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.522 [2024-06-10 14:06:41.820982] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.522 [2024-06-10 14:06:41.821001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.522 [2024-06-10 14:06:41.821014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.522 [2024-06-10 14:06:41.824745] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.522 [2024-06-10 14:06:41.833985] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.522 [2024-06-10 14:06:41.834574] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.522 [2024-06-10 14:06:41.834638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.522 [2024-06-10 14:06:41.834671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.522 [2024-06-10 14:06:41.835260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.522 [2024-06-10 14:06:41.835770] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.522 [2024-06-10 14:06:41.835786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.522 [2024-06-10 14:06:41.835799] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.522 [2024-06-10 14:06:41.839529] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.522 [2024-06-10 14:06:41.848106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.522 [2024-06-10 14:06:41.848677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.522 [2024-06-10 14:06:41.848731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.522 [2024-06-10 14:06:41.848764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.522 [2024-06-10 14:06:41.849353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.522 [2024-06-10 14:06:41.849655] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.522 [2024-06-10 14:06:41.849671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.522 [2024-06-10 14:06:41.849684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.523 [2024-06-10 14:06:41.853411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.523 [2024-06-10 14:06:41.862214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.523 [2024-06-10 14:06:41.862810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.523 [2024-06-10 14:06:41.862863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.523 [2024-06-10 14:06:41.862896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.523 [2024-06-10 14:06:41.863485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.523 [2024-06-10 14:06:41.863949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.523 [2024-06-10 14:06:41.863965] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.523 [2024-06-10 14:06:41.863978] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.523 [2024-06-10 14:06:41.867712] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.523 [2024-06-10 14:06:41.876307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.523 [2024-06-10 14:06:41.876882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.523 [2024-06-10 14:06:41.876935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.523 [2024-06-10 14:06:41.876969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.523 [2024-06-10 14:06:41.877559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.523 [2024-06-10 14:06:41.877805] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.523 [2024-06-10 14:06:41.877821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.523 [2024-06-10 14:06:41.877834] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.523 [2024-06-10 14:06:41.881562] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.523 [2024-06-10 14:06:41.890370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.523 [2024-06-10 14:06:41.890975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.523 [2024-06-10 14:06:41.891029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.523 [2024-06-10 14:06:41.891062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.523 [2024-06-10 14:06:41.891667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.523 [2024-06-10 14:06:41.892202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.523 [2024-06-10 14:06:41.892226] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.523 [2024-06-10 14:06:41.892247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.523 [2024-06-10 14:06:41.898483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.523 [2024-06-10 14:06:41.905591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.523 [2024-06-10 14:06:41.906131] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.523 [2024-06-10 14:06:41.906156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.523 [2024-06-10 14:06:41.906170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.523 [2024-06-10 14:06:41.906427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.523 [2024-06-10 14:06:41.906695] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.523 [2024-06-10 14:06:41.906711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.523 [2024-06-10 14:06:41.906725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.523 [2024-06-10 14:06:41.910778] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.523 [2024-06-10 14:06:41.919742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.523 [2024-06-10 14:06:41.920260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.523 [2024-06-10 14:06:41.920283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.523 [2024-06-10 14:06:41.920297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.523 [2024-06-10 14:06:41.920537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.523 [2024-06-10 14:06:41.920783] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.523 [2024-06-10 14:06:41.920798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.523 [2024-06-10 14:06:41.920811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.523 [2024-06-10 14:06:41.924539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.523 [2024-06-10 14:06:41.933848] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.523 [2024-06-10 14:06:41.934368] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.523 [2024-06-10 14:06:41.934392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.523 [2024-06-10 14:06:41.934407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.523 [2024-06-10 14:06:41.934652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.523 [2024-06-10 14:06:41.934891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.523 [2024-06-10 14:06:41.934907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.523 [2024-06-10 14:06:41.934921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.523 [2024-06-10 14:06:41.938659] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.523 [2024-06-10 14:06:41.947903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.523 [2024-06-10 14:06:41.948280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.523 [2024-06-10 14:06:41.948304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.523 [2024-06-10 14:06:41.948317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.523 [2024-06-10 14:06:41.948554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.523 [2024-06-10 14:06:41.948800] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.523 [2024-06-10 14:06:41.948816] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.523 [2024-06-10 14:06:41.948829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.523 [2024-06-10 14:06:41.952554] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.523 [2024-06-10 14:06:41.962022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.523 [2024-06-10 14:06:41.962537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.523 [2024-06-10 14:06:41.962560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.523 [2024-06-10 14:06:41.962574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.523 [2024-06-10 14:06:41.962816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.523 [2024-06-10 14:06:41.963055] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.523 [2024-06-10 14:06:41.963070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.523 [2024-06-10 14:06:41.963086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.523 [2024-06-10 14:06:41.966816] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.523 [2024-06-10 14:06:41.976066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.523 [2024-06-10 14:06:41.976585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.523 [2024-06-10 14:06:41.976609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.523 [2024-06-10 14:06:41.976623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.523 [2024-06-10 14:06:41.976859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.523 [2024-06-10 14:06:41.977097] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.523 [2024-06-10 14:06:41.977112] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.523 [2024-06-10 14:06:41.977125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.523 [2024-06-10 14:06:41.980852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.523 [2024-06-10 14:06:41.990098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.523 [2024-06-10 14:06:41.990659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.523 [2024-06-10 14:06:41.990683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.523 [2024-06-10 14:06:41.990696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.523 [2024-06-10 14:06:41.990934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.523 [2024-06-10 14:06:41.991172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.523 [2024-06-10 14:06:41.991187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.523 [2024-06-10 14:06:41.991200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.783 [2024-06-10 14:06:41.994933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.783 [2024-06-10 14:06:42.004137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.783 [2024-06-10 14:06:42.004702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.783 [2024-06-10 14:06:42.004727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.783 [2024-06-10 14:06:42.004741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.783 [2024-06-10 14:06:42.004978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.783 [2024-06-10 14:06:42.005215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.783 [2024-06-10 14:06:42.005230] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.783 [2024-06-10 14:06:42.005243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.783 [2024-06-10 14:06:42.008976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.783 [2024-06-10 14:06:42.018217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.783 [2024-06-10 14:06:42.018711] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.783 [2024-06-10 14:06:42.018734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.783 [2024-06-10 14:06:42.018748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.783 [2024-06-10 14:06:42.018984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.783 [2024-06-10 14:06:42.019223] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.783 [2024-06-10 14:06:42.019238] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.783 [2024-06-10 14:06:42.019250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.783 [2024-06-10 14:06:42.022983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.783 [2024-06-10 14:06:42.032227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.783 [2024-06-10 14:06:42.032807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.783 [2024-06-10 14:06:42.032831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.783 [2024-06-10 14:06:42.032845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.783 [2024-06-10 14:06:42.033082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.783 [2024-06-10 14:06:42.033320] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.783 [2024-06-10 14:06:42.033335] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.783 [2024-06-10 14:06:42.033348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.783 [2024-06-10 14:06:42.037081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.783 [2024-06-10 14:06:42.046334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.783 [2024-06-10 14:06:42.046922] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.783 [2024-06-10 14:06:42.046946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.783 [2024-06-10 14:06:42.046960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.783 [2024-06-10 14:06:42.047198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.783 [2024-06-10 14:06:42.047436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.783 [2024-06-10 14:06:42.047451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.783 [2024-06-10 14:06:42.047464] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.783 [2024-06-10 14:06:42.051195] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.783 [2024-06-10 14:06:42.060433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.783 [2024-06-10 14:06:42.061029] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.783 [2024-06-10 14:06:42.061053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.783 [2024-06-10 14:06:42.061066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.783 [2024-06-10 14:06:42.061307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.783 [2024-06-10 14:06:42.061546] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.783 [2024-06-10 14:06:42.061561] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.783 [2024-06-10 14:06:42.061574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.783 [2024-06-10 14:06:42.065302] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.783 [2024-06-10 14:06:42.074549] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.783 [2024-06-10 14:06:42.075135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.783 [2024-06-10 14:06:42.075192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.783 [2024-06-10 14:06:42.075226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.783 [2024-06-10 14:06:42.075827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.783 [2024-06-10 14:06:42.076400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.783 [2024-06-10 14:06:42.076415] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.783 [2024-06-10 14:06:42.076427] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.783 [2024-06-10 14:06:42.080159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.783 [2024-06-10 14:06:42.088740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.783 [2024-06-10 14:06:42.089341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.783 [2024-06-10 14:06:42.089391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.783 [2024-06-10 14:06:42.089426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.784 [2024-06-10 14:06:42.090017] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.784 [2024-06-10 14:06:42.090256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.784 [2024-06-10 14:06:42.090270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.784 [2024-06-10 14:06:42.090283] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.784 [2024-06-10 14:06:42.094013] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.784 [2024-06-10 14:06:42.102813] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.784 [2024-06-10 14:06:42.103408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.784 [2024-06-10 14:06:42.103460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.784 [2024-06-10 14:06:42.103492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.784 [2024-06-10 14:06:42.104097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.784 [2024-06-10 14:06:42.104571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.784 [2024-06-10 14:06:42.104590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.784 [2024-06-10 14:06:42.104607] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.784 [2024-06-10 14:06:42.108331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.784 [2024-06-10 14:06:42.116906] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.784 [2024-06-10 14:06:42.117487] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.784 [2024-06-10 14:06:42.117510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.784 [2024-06-10 14:06:42.117524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.784 [2024-06-10 14:06:42.117766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.784 [2024-06-10 14:06:42.118005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.784 [2024-06-10 14:06:42.118020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.784 [2024-06-10 14:06:42.118033] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1641983 Killed "${NVMF_APP[@]}" "$@" 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:38:27.784 [2024-06-10 14:06:42.121763] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
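The long run of `connect() failed, errno = 111` records above is ECONNREFUSED: bdevperf.sh has just killed the old nvmf target process (the "Killed ${NVMF_APP[@]}" line), so nothing is listening on 10.0.0.2:4420, and every reconnect attempt the host driver makes is refused until tgt_init brings a new target up below. Each refused connect drives the same cycle in the log: disconnect, failed reconnect poll, "Ctrlr is in error state", "Resetting controller failed". A minimal, illustrative C sketch (not SPDK code; address and port are taken from the log) that hits the same errno with a plain blocking connect:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Attempt one TCP connect to the listener the log is polling
     * (10.0.0.2:4420).  While no nvmf target is listening there,
     * connect() fails with errno 111 (ECONNREFUSED), which is what
     * posix_sock_create reports in the records above. */
    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connected\n");
        }
        close(fd);
        return 0;
    }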
00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:27.784 [2024-06-10 14:06:42.131000] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.784 [2024-06-10 14:06:42.131493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.784 [2024-06-10 14:06:42.131516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.784 [2024-06-10 14:06:42.131529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1643591 00:38:27.784 [2024-06-10 14:06:42.131772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1643591 00:38:27.784 [2024-06-10 14:06:42.132011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.784 [2024-06-10 14:06:42.132026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.784 [2024-06-10 14:06:42.132039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1643591 ']' 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:27.784 14:06:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:27.784 [2024-06-10 14:06:42.135766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.784 [2024-06-10 14:06:42.145015] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.784 [2024-06-10 14:06:42.145583] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.784 [2024-06-10 14:06:42.145607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.784 [2024-06-10 14:06:42.145622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.784 [2024-06-10 14:06:42.145861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.784 [2024-06-10 14:06:42.146100] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.784 [2024-06-10 14:06:42.146115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.784 [2024-06-10 14:06:42.146128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.784 [2024-06-10 14:06:42.149857] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.784 [2024-06-10 14:06:42.159097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.784 [2024-06-10 14:06:42.159663] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.784 [2024-06-10 14:06:42.159686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.784 [2024-06-10 14:06:42.159699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.784 [2024-06-10 14:06:42.159936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.784 [2024-06-10 14:06:42.160173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.784 [2024-06-10 14:06:42.160189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.784 [2024-06-10 14:06:42.160201] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.784 [2024-06-10 14:06:42.163933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.784 [2024-06-10 14:06:42.173190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.785 [2024-06-10 14:06:42.173772] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.785 [2024-06-10 14:06:42.173796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.785 [2024-06-10 14:06:42.173809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.785 [2024-06-10 14:06:42.174047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.785 [2024-06-10 14:06:42.174284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.785 [2024-06-10 14:06:42.174299] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.785 [2024-06-10 14:06:42.174312] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.785 [2024-06-10 14:06:42.178048] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.785 [2024-06-10 14:06:42.185202] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:38:27.785 [2024-06-10 14:06:42.185259] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:27.785 [2024-06-10 14:06:42.187293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.785 [2024-06-10 14:06:42.187780] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.785 [2024-06-10 14:06:42.187803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.785 [2024-06-10 14:06:42.187817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.785 [2024-06-10 14:06:42.188054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.785 [2024-06-10 14:06:42.188291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.785 [2024-06-10 14:06:42.188306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.785 [2024-06-10 14:06:42.188319] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.785 [2024-06-10 14:06:42.192052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.785 [2024-06-10 14:06:42.201292] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.785 [2024-06-10 14:06:42.201875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.785 [2024-06-10 14:06:42.201898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.785 [2024-06-10 14:06:42.201912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.785 [2024-06-10 14:06:42.202150] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.785 [2024-06-10 14:06:42.202388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.785 [2024-06-10 14:06:42.202402] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.785 [2024-06-10 14:06:42.202416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.785 [2024-06-10 14:06:42.206297] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.785 [2024-06-10 14:06:42.215327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.785 [2024-06-10 14:06:42.215782] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.785 [2024-06-10 14:06:42.215807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.785 [2024-06-10 14:06:42.215822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.785 [2024-06-10 14:06:42.216060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.785 [2024-06-10 14:06:42.216298] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.785 [2024-06-10 14:06:42.216313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.785 [2024-06-10 14:06:42.216326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.785 [2024-06-10 14:06:42.220059] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:27.785 [2024-06-10 14:06:42.229527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.785 [2024-06-10 14:06:42.230115] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.785 [2024-06-10 14:06:42.230139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.785 [2024-06-10 14:06:42.230157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.785 [2024-06-10 14:06:42.230393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.785 [2024-06-10 14:06:42.230638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.785 [2024-06-10 14:06:42.230654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.785 [2024-06-10 14:06:42.230667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.785 [2024-06-10 14:06:42.234392] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.785 [2024-06-10 14:06:42.243639] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:27.785 [2024-06-10 14:06:42.244125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.785 [2024-06-10 14:06:42.244148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:27.785 [2024-06-10 14:06:42.244162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:27.785 [2024-06-10 14:06:42.244399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:27.785 [2024-06-10 14:06:42.244645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.785 [2024-06-10 14:06:42.244660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.785 [2024-06-10 14:06:42.244673] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.785 EAL: No free 2048 kB hugepages reported on node 1 00:38:27.785 [2024-06-10 14:06:42.248402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
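The "EAL: No free 2048 kB hugepages reported on node 1" line is a DPDK EAL notice that NUMA node 1 has no free 2 MB hugepages reserved; startup continues as long as some node (here presumably node 0) still has pages available. As an illustration only (not part of the test scripts), the per-node counters the EAL consults can be read from sysfs, for example with a small C helper under the assumption that the standard /sys/devices/system/node layout is present:

    #include <stdio.h>

    /* Illustrative only: print the per-NUMA-node 2 MB hugepage counters.
     * A node whose free_hugepages is 0 produces the
     * "No free 2048 kB hugepages reported on node N" notice in the log.
     * On systems without that node, read_counter() returns -1. */
    static long read_counter(const char *path)
    {
        long v = -1;
        FILE *f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%ld", &v) != 1) {
                v = -1;
            }
            fclose(f);
        }
        return v;
    }

    int main(void)
    {
        const char *fmt =
            "/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/free_hugepages";
        char path[256];

        for (int node = 0; node < 2; node++) {
            snprintf(path, sizeof(path), fmt, node);
            printf("node%d free 2048kB hugepages: %ld\n", node, read_counter(path));
        }
        return 0;
    }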
00:38:28.045 [2024-06-10 14:06:42.257649] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.045 [2024-06-10 14:06:42.258233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.045 [2024-06-10 14:06:42.258256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.045 [2024-06-10 14:06:42.258270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.045 [2024-06-10 14:06:42.258507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.045 [2024-06-10 14:06:42.258750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.045 [2024-06-10 14:06:42.258766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.045 [2024-06-10 14:06:42.258778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.045 [2024-06-10 14:06:42.262501] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.045 [2024-06-10 14:06:42.271746] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.045 [2024-06-10 14:06:42.272326] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.045 [2024-06-10 14:06:42.272349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.045 [2024-06-10 14:06:42.272363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.045 [2024-06-10 14:06:42.272609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.045 [2024-06-10 14:06:42.272848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.045 [2024-06-10 14:06:42.272867] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.045 [2024-06-10 14:06:42.272880] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.045 [2024-06-10 14:06:42.276609] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.045 [2024-06-10 14:06:42.285846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.045 [2024-06-10 14:06:42.286409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.045 [2024-06-10 14:06:42.286432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.045 [2024-06-10 14:06:42.286446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.045 [2024-06-10 14:06:42.286689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.045 [2024-06-10 14:06:42.286927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.045 [2024-06-10 14:06:42.286942] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.045 [2024-06-10 14:06:42.286955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.045 [2024-06-10 14:06:42.290689] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.045 [2024-06-10 14:06:42.299917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.045 [2024-06-10 14:06:42.300421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.045 [2024-06-10 14:06:42.300444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.045 [2024-06-10 14:06:42.300458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.045 [2024-06-10 14:06:42.300699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.045 [2024-06-10 14:06:42.300938] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.045 [2024-06-10 14:06:42.300952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.045 [2024-06-10 14:06:42.300965] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.045 [2024-06-10 14:06:42.304694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.045 [2024-06-10 14:06:42.305045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:28.045 [2024-06-10 14:06:42.313939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.045 [2024-06-10 14:06:42.314452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.045 [2024-06-10 14:06:42.314477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.045 [2024-06-10 14:06:42.314492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.045 [2024-06-10 14:06:42.314735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.045 [2024-06-10 14:06:42.314974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.045 [2024-06-10 14:06:42.314989] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.045 [2024-06-10 14:06:42.315002] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.045 [2024-06-10 14:06:42.318731] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.045 [2024-06-10 14:06:42.327971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.045 [2024-06-10 14:06:42.328531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.045 [2024-06-10 14:06:42.328554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.045 [2024-06-10 14:06:42.328568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.045 [2024-06-10 14:06:42.328814] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.045 [2024-06-10 14:06:42.329054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.045 [2024-06-10 14:06:42.329068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.045 [2024-06-10 14:06:42.329081] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.045 [2024-06-10 14:06:42.332808] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
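The "Total cores available: 3" notice matches the `-m 0xE` core mask passed to nvmf_tgt above: 0xE is binary 1110, i.e. bits 1, 2 and 3 set, so the target runs on cores 1, 2 and 3. A tiny illustrative decoder for such a mask (the mask value is taken from the command line in the log):

    #include <stdio.h>

    /* Decode a DPDK/SPDK-style hex core mask: bit N set means core N is used.
     * For the 0xE mask used in this run it prints cores 1, 2 and 3. */
    int main(void)
    {
        unsigned long mask = 0xE;
        int total = 0;

        printf("core mask 0x%lX selects cores:", mask);
        for (int core = 0; core < 64; core++) {
            if (mask & (1UL << core)) {
                printf(" %d", core);
                total++;
            }
        }
        printf("\ntotal cores: %d\n", total);
        return 0;
    }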
00:38:28.045 [2024-06-10 14:06:42.342054] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.045 [2024-06-10 14:06:42.342579] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.045 [2024-06-10 14:06:42.342604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.045 [2024-06-10 14:06:42.342618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.046 [2024-06-10 14:06:42.342855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.046 [2024-06-10 14:06:42.343094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.046 [2024-06-10 14:06:42.343109] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.046 [2024-06-10 14:06:42.343122] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.046 [2024-06-10 14:06:42.346852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.046 [2024-06-10 14:06:42.356105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.046 [2024-06-10 14:06:42.356724] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.046 [2024-06-10 14:06:42.356752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.046 [2024-06-10 14:06:42.356766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.046 [2024-06-10 14:06:42.357006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.046 [2024-06-10 14:06:42.357245] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.046 [2024-06-10 14:06:42.357260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.046 [2024-06-10 14:06:42.357273] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.046 [2024-06-10 14:06:42.361010] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.046 [2024-06-10 14:06:42.370244] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.046 [2024-06-10 14:06:42.370814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.046 [2024-06-10 14:06:42.370837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.046 [2024-06-10 14:06:42.370857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.046 [2024-06-10 14:06:42.371094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.046 [2024-06-10 14:06:42.371333] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.046 [2024-06-10 14:06:42.371347] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.046 [2024-06-10 14:06:42.371360] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.046 [2024-06-10 14:06:42.375105] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.046 [2024-06-10 14:06:42.384359] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.046 [2024-06-10 14:06:42.384923] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.046 [2024-06-10 14:06:42.384947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.046 [2024-06-10 14:06:42.384961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.046 [2024-06-10 14:06:42.385198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.046 [2024-06-10 14:06:42.385435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.046 [2024-06-10 14:06:42.385450] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.046 [2024-06-10 14:06:42.385463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.046 [2024-06-10 14:06:42.389200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.046 [2024-06-10 14:06:42.391001] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:28.046 [2024-06-10 14:06:42.391033] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:28.046 [2024-06-10 14:06:42.391047] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:28.046 [2024-06-10 14:06:42.391059] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:28.046 [2024-06-10 14:06:42.391069] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:28.046 [2024-06-10 14:06:42.391152] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:38:28.046 [2024-06-10 14:06:42.391257] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:38:28.046 [2024-06-10 14:06:42.391257] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:38:28.046 [2024-06-10 14:06:42.398453] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.046 [2024-06-10 14:06:42.399032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.046 [2024-06-10 14:06:42.399058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.046 [2024-06-10 14:06:42.399072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.046 [2024-06-10 14:06:42.399311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.046 [2024-06-10 14:06:42.399549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.046 [2024-06-10 14:06:42.399564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.046 [2024-06-10 14:06:42.399582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.046 [2024-06-10 14:06:42.403326] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.046 [2024-06-10 14:06:42.412574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.046 [2024-06-10 14:06:42.413182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.046 [2024-06-10 14:06:42.413208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.046 [2024-06-10 14:06:42.413222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.046 [2024-06-10 14:06:42.413459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.046 [2024-06-10 14:06:42.413702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.046 [2024-06-10 14:06:42.413717] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.046 [2024-06-10 14:06:42.413731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.046 [2024-06-10 14:06:42.417453] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.046 [2024-06-10 14:06:42.426699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.046 [2024-06-10 14:06:42.427332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.046 [2024-06-10 14:06:42.427358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.046 [2024-06-10 14:06:42.427373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.046 [2024-06-10 14:06:42.427617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.046 [2024-06-10 14:06:42.427855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.046 [2024-06-10 14:06:42.427870] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.046 [2024-06-10 14:06:42.427883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.046 [2024-06-10 14:06:42.431615] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.046 [2024-06-10 14:06:42.440861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.046 [2024-06-10 14:06:42.441481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.046 [2024-06-10 14:06:42.441506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.046 [2024-06-10 14:06:42.441521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.046 [2024-06-10 14:06:42.441765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.046 [2024-06-10 14:06:42.442004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.046 [2024-06-10 14:06:42.442018] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.046 [2024-06-10 14:06:42.442032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.046 [2024-06-10 14:06:42.445806] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.046 [2024-06-10 14:06:42.455062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.046 [2024-06-10 14:06:42.455651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.046 [2024-06-10 14:06:42.455677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.046 [2024-06-10 14:06:42.455697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.046 [2024-06-10 14:06:42.455934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.047 [2024-06-10 14:06:42.456172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.047 [2024-06-10 14:06:42.456187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.047 [2024-06-10 14:06:42.456200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.047 [2024-06-10 14:06:42.459934] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.047 [2024-06-10 14:06:42.469170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.047 [2024-06-10 14:06:42.469675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.047 [2024-06-10 14:06:42.469697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.047 [2024-06-10 14:06:42.469711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.047 [2024-06-10 14:06:42.469947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.047 [2024-06-10 14:06:42.470184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.047 [2024-06-10 14:06:42.470199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.047 [2024-06-10 14:06:42.470212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.047 [2024-06-10 14:06:42.473961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.047 [2024-06-10 14:06:42.483211] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.047 [2024-06-10 14:06:42.483815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.047 [2024-06-10 14:06:42.483838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.047 [2024-06-10 14:06:42.483851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.047 [2024-06-10 14:06:42.484087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.047 [2024-06-10 14:06:42.484323] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.047 [2024-06-10 14:06:42.484338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.047 [2024-06-10 14:06:42.484350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.047 [2024-06-10 14:06:42.488078] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.047 [2024-06-10 14:06:42.497318] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.047 [2024-06-10 14:06:42.497878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.047 [2024-06-10 14:06:42.497901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.047 [2024-06-10 14:06:42.497915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.047 [2024-06-10 14:06:42.498151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.047 [2024-06-10 14:06:42.498388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.047 [2024-06-10 14:06:42.498406] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.047 [2024-06-10 14:06:42.498418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.047 [2024-06-10 14:06:42.502145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.047 [2024-06-10 14:06:42.511373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.047 [2024-06-10 14:06:42.511960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.047 [2024-06-10 14:06:42.511983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.047 [2024-06-10 14:06:42.511996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.047 [2024-06-10 14:06:42.512233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.047 [2024-06-10 14:06:42.512470] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.047 [2024-06-10 14:06:42.512484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.047 [2024-06-10 14:06:42.512496] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.306 [2024-06-10 14:06:42.516225] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.306 [2024-06-10 14:06:42.525457] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.306 [2024-06-10 14:06:42.526045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.306 [2024-06-10 14:06:42.526068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.306 [2024-06-10 14:06:42.526081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.306 [2024-06-10 14:06:42.526317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.306 [2024-06-10 14:06:42.526554] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.306 [2024-06-10 14:06:42.526568] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.306 [2024-06-10 14:06:42.526586] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.306 [2024-06-10 14:06:42.530310] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.306 [2024-06-10 14:06:42.539542] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.306 [2024-06-10 14:06:42.540094] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.306 [2024-06-10 14:06:42.540117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.306 [2024-06-10 14:06:42.540131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.306 [2024-06-10 14:06:42.540368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.306 [2024-06-10 14:06:42.540609] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.306 [2024-06-10 14:06:42.540624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.306 [2024-06-10 14:06:42.540637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.306 [2024-06-10 14:06:42.544357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.306 [2024-06-10 14:06:42.553591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.306 [2024-06-10 14:06:42.554139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.306 [2024-06-10 14:06:42.554161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.306 [2024-06-10 14:06:42.554175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.306 [2024-06-10 14:06:42.554409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.306 [2024-06-10 14:06:42.554651] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.306 [2024-06-10 14:06:42.554665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.306 [2024-06-10 14:06:42.554678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.306 [2024-06-10 14:06:42.558398] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.306 [2024-06-10 14:06:42.567640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.306 [2024-06-10 14:06:42.568193] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.306 [2024-06-10 14:06:42.568215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.306 [2024-06-10 14:06:42.568229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.306 [2024-06-10 14:06:42.568463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.307 [2024-06-10 14:06:42.568707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.307 [2024-06-10 14:06:42.568721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.307 [2024-06-10 14:06:42.568734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.307 [2024-06-10 14:06:42.572540] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.307 [2024-06-10 14:06:42.581778] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.307 [2024-06-10 14:06:42.582356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.307 [2024-06-10 14:06:42.582380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.307 [2024-06-10 14:06:42.582393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.307 [2024-06-10 14:06:42.582634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.307 [2024-06-10 14:06:42.582872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.307 [2024-06-10 14:06:42.582886] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.307 [2024-06-10 14:06:42.582898] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.307 [2024-06-10 14:06:42.586626] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.307 [2024-06-10 14:06:42.595861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.307 [2024-06-10 14:06:42.596358] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.307 [2024-06-10 14:06:42.596381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.307 [2024-06-10 14:06:42.596395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.307 [2024-06-10 14:06:42.596640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.307 [2024-06-10 14:06:42.596878] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.307 [2024-06-10 14:06:42.596892] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.307 [2024-06-10 14:06:42.596905] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.307 [2024-06-10 14:06:42.600627] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.307 [2024-06-10 14:06:42.609854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.307 [2024-06-10 14:06:42.610417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.307 [2024-06-10 14:06:42.610440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.307 [2024-06-10 14:06:42.610454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.307 [2024-06-10 14:06:42.610694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.307 [2024-06-10 14:06:42.610933] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.307 [2024-06-10 14:06:42.610947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.307 [2024-06-10 14:06:42.610959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.307 [2024-06-10 14:06:42.614685] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.307 [2024-06-10 14:06:42.623916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.307 [2024-06-10 14:06:42.624477] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.307 [2024-06-10 14:06:42.624500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.307 [2024-06-10 14:06:42.624514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.307 [2024-06-10 14:06:42.624754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.307 [2024-06-10 14:06:42.624991] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.307 [2024-06-10 14:06:42.625005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.307 [2024-06-10 14:06:42.625018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.307 [2024-06-10 14:06:42.629036] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.307 [2024-06-10 14:06:42.638057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.307 [2024-06-10 14:06:42.638626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.307 [2024-06-10 14:06:42.638650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.307 [2024-06-10 14:06:42.638664] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.307 [2024-06-10 14:06:42.638900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.307 [2024-06-10 14:06:42.639138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.307 [2024-06-10 14:06:42.639152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.307 [2024-06-10 14:06:42.639170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.307 [2024-06-10 14:06:42.642900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.307 [2024-06-10 14:06:42.652132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.307 [2024-06-10 14:06:42.652700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.307 [2024-06-10 14:06:42.652724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.307 [2024-06-10 14:06:42.652737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.307 [2024-06-10 14:06:42.652974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.307 [2024-06-10 14:06:42.653211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.307 [2024-06-10 14:06:42.653225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.307 [2024-06-10 14:06:42.653238] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.307 [2024-06-10 14:06:42.656966] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.307 [2024-06-10 14:06:42.666191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.307 [2024-06-10 14:06:42.666750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.307 [2024-06-10 14:06:42.666772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.307 [2024-06-10 14:06:42.666786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.307 [2024-06-10 14:06:42.667021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.307 [2024-06-10 14:06:42.667259] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.307 [2024-06-10 14:06:42.667273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.307 [2024-06-10 14:06:42.667285] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.307 [2024-06-10 14:06:42.671015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.307 [2024-06-10 14:06:42.680251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.307 [2024-06-10 14:06:42.680820] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.307 [2024-06-10 14:06:42.680843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.307 [2024-06-10 14:06:42.680856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.307 [2024-06-10 14:06:42.681092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.307 [2024-06-10 14:06:42.681329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.307 [2024-06-10 14:06:42.681343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.307 [2024-06-10 14:06:42.681356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.307 [2024-06-10 14:06:42.685083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.307 [2024-06-10 14:06:42.694326] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.307 [2024-06-10 14:06:42.694897] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.307 [2024-06-10 14:06:42.694925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.307 [2024-06-10 14:06:42.694938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.307 [2024-06-10 14:06:42.695175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.307 [2024-06-10 14:06:42.695410] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.307 [2024-06-10 14:06:42.695424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.307 [2024-06-10 14:06:42.695437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.308 [2024-06-10 14:06:42.699164] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.308 [2024-06-10 14:06:42.708389] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.308 [2024-06-10 14:06:42.708962] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.308 [2024-06-10 14:06:42.708985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.308 [2024-06-10 14:06:42.708998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.308 [2024-06-10 14:06:42.709234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.308 [2024-06-10 14:06:42.709472] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.308 [2024-06-10 14:06:42.709486] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.308 [2024-06-10 14:06:42.709498] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.308 [2024-06-10 14:06:42.713222] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.308 [2024-06-10 14:06:42.722449] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.308 [2024-06-10 14:06:42.722941] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.308 [2024-06-10 14:06:42.722964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.308 [2024-06-10 14:06:42.722977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.308 [2024-06-10 14:06:42.723213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.308 [2024-06-10 14:06:42.723450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.308 [2024-06-10 14:06:42.723464] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.308 [2024-06-10 14:06:42.723477] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.308 [2024-06-10 14:06:42.727201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.308 [2024-06-10 14:06:42.736651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.308 [2024-06-10 14:06:42.737211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.308 [2024-06-10 14:06:42.737233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.308 [2024-06-10 14:06:42.737247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.308 [2024-06-10 14:06:42.737483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.308 [2024-06-10 14:06:42.737730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.308 [2024-06-10 14:06:42.737745] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.308 [2024-06-10 14:06:42.737758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.308 [2024-06-10 14:06:42.741486] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.308 [2024-06-10 14:06:42.750709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.308 [2024-06-10 14:06:42.751293] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.308 [2024-06-10 14:06:42.751316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.308 [2024-06-10 14:06:42.751329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.308 [2024-06-10 14:06:42.751565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.308 [2024-06-10 14:06:42.751810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.308 [2024-06-10 14:06:42.751824] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.308 [2024-06-10 14:06:42.751836] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.308 [2024-06-10 14:06:42.755562] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.308 [2024-06-10 14:06:42.764795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.308 [2024-06-10 14:06:42.765358] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.308 [2024-06-10 14:06:42.765380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.308 [2024-06-10 14:06:42.765393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.308 [2024-06-10 14:06:42.765635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.308 [2024-06-10 14:06:42.765872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.308 [2024-06-10 14:06:42.765886] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.308 [2024-06-10 14:06:42.765899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.308 [2024-06-10 14:06:42.769624] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.567 [2024-06-10 14:06:42.778861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.568 [2024-06-10 14:06:42.779422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.568 [2024-06-10 14:06:42.779445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.568 [2024-06-10 14:06:42.779458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.568 [2024-06-10 14:06:42.779699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.568 [2024-06-10 14:06:42.779937] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.568 [2024-06-10 14:06:42.779951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.568 [2024-06-10 14:06:42.779964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.568 [2024-06-10 14:06:42.783694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.568 [2024-06-10 14:06:42.792935] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.568 [2024-06-10 14:06:42.793503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.568 [2024-06-10 14:06:42.793526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.568 [2024-06-10 14:06:42.793540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.568 [2024-06-10 14:06:42.793781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.568 [2024-06-10 14:06:42.794018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.568 [2024-06-10 14:06:42.794033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.568 [2024-06-10 14:06:42.794046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.568 [2024-06-10 14:06:42.797770] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.568 [2024-06-10 14:06:42.807000] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.568 [2024-06-10 14:06:42.807560] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.568 [2024-06-10 14:06:42.807587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.568 [2024-06-10 14:06:42.807601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.568 [2024-06-10 14:06:42.807838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.568 [2024-06-10 14:06:42.808075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.568 [2024-06-10 14:06:42.808090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.568 [2024-06-10 14:06:42.808102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.568 [2024-06-10 14:06:42.811828] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.568 [2024-06-10 14:06:42.821056] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.568 [2024-06-10 14:06:42.821615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.568 [2024-06-10 14:06:42.821638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.568 [2024-06-10 14:06:42.821652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.568 [2024-06-10 14:06:42.821888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.568 [2024-06-10 14:06:42.822124] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.568 [2024-06-10 14:06:42.822138] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.568 [2024-06-10 14:06:42.822150] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.568 [2024-06-10 14:06:42.825876] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.568 [2024-06-10 14:06:42.835104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.568 [2024-06-10 14:06:42.835599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.568 [2024-06-10 14:06:42.835622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.568 [2024-06-10 14:06:42.835639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.568 [2024-06-10 14:06:42.835876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.568 [2024-06-10 14:06:42.836112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.568 [2024-06-10 14:06:42.836126] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.568 [2024-06-10 14:06:42.836138] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.568 [2024-06-10 14:06:42.839869] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.568 [2024-06-10 14:06:42.849103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.568 [2024-06-10 14:06:42.849669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.568 [2024-06-10 14:06:42.849692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.568 [2024-06-10 14:06:42.849705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.568 [2024-06-10 14:06:42.849941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.568 [2024-06-10 14:06:42.850179] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.568 [2024-06-10 14:06:42.850193] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.568 [2024-06-10 14:06:42.850206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.568 [2024-06-10 14:06:42.853933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.568 [2024-06-10 14:06:42.863163] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.568 [2024-06-10 14:06:42.863752] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.568 [2024-06-10 14:06:42.863775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.568 [2024-06-10 14:06:42.863789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.568 [2024-06-10 14:06:42.864026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.568 [2024-06-10 14:06:42.864263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.568 [2024-06-10 14:06:42.864278] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.568 [2024-06-10 14:06:42.864291] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.568 [2024-06-10 14:06:42.868016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.568 [2024-06-10 14:06:42.877256] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.568 [2024-06-10 14:06:42.877821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.568 [2024-06-10 14:06:42.877844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.568 [2024-06-10 14:06:42.877859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.568 [2024-06-10 14:06:42.878095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.568 [2024-06-10 14:06:42.878332] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.568 [2024-06-10 14:06:42.878350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.568 [2024-06-10 14:06:42.878363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.568 [2024-06-10 14:06:42.882094] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.568 [2024-06-10 14:06:42.891346] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.568 [2024-06-10 14:06:42.891929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.568 [2024-06-10 14:06:42.891953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.568 [2024-06-10 14:06:42.891968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.568 [2024-06-10 14:06:42.892204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.568 [2024-06-10 14:06:42.892441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.568 [2024-06-10 14:06:42.892455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.568 [2024-06-10 14:06:42.892467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.568 [2024-06-10 14:06:42.896196] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.568 [2024-06-10 14:06:42.905436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.568 [2024-06-10 14:06:42.906029] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.568 [2024-06-10 14:06:42.906056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.568 [2024-06-10 14:06:42.906070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.568 [2024-06-10 14:06:42.906311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.568 [2024-06-10 14:06:42.906549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.568 [2024-06-10 14:06:42.906563] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.568 [2024-06-10 14:06:42.906580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.568 [2024-06-10 14:06:42.910303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.569 [2024-06-10 14:06:42.919537] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.569 [2024-06-10 14:06:42.920126] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.569 [2024-06-10 14:06:42.920149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.569 [2024-06-10 14:06:42.920162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.569 [2024-06-10 14:06:42.920398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.569 [2024-06-10 14:06:42.920641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.569 [2024-06-10 14:06:42.920655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.569 [2024-06-10 14:06:42.920668] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.569 [2024-06-10 14:06:42.924392] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.569 [2024-06-10 14:06:42.933632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.569 [2024-06-10 14:06:42.934216] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.569 [2024-06-10 14:06:42.934240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.569 [2024-06-10 14:06:42.934254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.569 [2024-06-10 14:06:42.934490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.569 [2024-06-10 14:06:42.934734] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.569 [2024-06-10 14:06:42.934750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.569 [2024-06-10 14:06:42.934762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.569 [2024-06-10 14:06:42.938486] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.569 [2024-06-10 14:06:42.947726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.569 [2024-06-10 14:06:42.948289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.569 [2024-06-10 14:06:42.948312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.569 [2024-06-10 14:06:42.948326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.569 [2024-06-10 14:06:42.948561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.569 [2024-06-10 14:06:42.948806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.569 [2024-06-10 14:06:42.948822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.569 [2024-06-10 14:06:42.948834] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.569 [2024-06-10 14:06:42.952558] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.569 [2024-06-10 14:06:42.961844] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.569 [2024-06-10 14:06:42.962368] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.569 [2024-06-10 14:06:42.962392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.569 [2024-06-10 14:06:42.962406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.569 [2024-06-10 14:06:42.962648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.569 [2024-06-10 14:06:42.962887] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.569 [2024-06-10 14:06:42.962901] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.569 [2024-06-10 14:06:42.962913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.569 [2024-06-10 14:06:42.966646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.569 [2024-06-10 14:06:42.975896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.569 [2024-06-10 14:06:42.976405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.569 [2024-06-10 14:06:42.976428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.569 [2024-06-10 14:06:42.976441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.569 [2024-06-10 14:06:42.976687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.569 [2024-06-10 14:06:42.976925] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.569 [2024-06-10 14:06:42.976939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.569 [2024-06-10 14:06:42.976952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.569 [2024-06-10 14:06:42.980682] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.569 [2024-06-10 14:06:42.989918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.569 [2024-06-10 14:06:42.990503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.569 [2024-06-10 14:06:42.990526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.569 [2024-06-10 14:06:42.990540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.569 [2024-06-10 14:06:42.990782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.569 [2024-06-10 14:06:42.991021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.569 [2024-06-10 14:06:42.991035] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.569 [2024-06-10 14:06:42.991047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.569 [2024-06-10 14:06:42.994773] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.569 [2024-06-10 14:06:43.004016] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.569 [2024-06-10 14:06:43.004599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.569 [2024-06-10 14:06:43.004623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.569 [2024-06-10 14:06:43.004636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.569 [2024-06-10 14:06:43.004873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.569 [2024-06-10 14:06:43.005111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.569 [2024-06-10 14:06:43.005125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.569 [2024-06-10 14:06:43.005137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.569 [2024-06-10 14:06:43.008865] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.569 [2024-06-10 14:06:43.018122] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.569 [2024-06-10 14:06:43.018708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.569 [2024-06-10 14:06:43.018731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.569 [2024-06-10 14:06:43.018745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.569 [2024-06-10 14:06:43.018981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.569 [2024-06-10 14:06:43.019218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.569 [2024-06-10 14:06:43.019233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.569 [2024-06-10 14:06:43.019250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.569 [2024-06-10 14:06:43.022979] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.569 [2024-06-10 14:06:43.032205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.569 [2024-06-10 14:06:43.032791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.569 [2024-06-10 14:06:43.032815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.569 [2024-06-10 14:06:43.032828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.569 [2024-06-10 14:06:43.033064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.569 [2024-06-10 14:06:43.033301] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.569 [2024-06-10 14:06:43.033317] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.569 [2024-06-10 14:06:43.033329] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.569 [2024-06-10 14:06:43.037062] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.828 [2024-06-10 14:06:43.046311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.828 [2024-06-10 14:06:43.046902] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.829 [2024-06-10 14:06:43.046925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.829 [2024-06-10 14:06:43.046939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.829 [2024-06-10 14:06:43.047176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.829 [2024-06-10 14:06:43.047414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.829 [2024-06-10 14:06:43.047428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.829 [2024-06-10 14:06:43.047442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.829 [2024-06-10 14:06:43.051171] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.829 [2024-06-10 14:06:43.060409] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.829 [2024-06-10 14:06:43.060996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.829 [2024-06-10 14:06:43.061019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.829 [2024-06-10 14:06:43.061033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.829 [2024-06-10 14:06:43.061268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.829 [2024-06-10 14:06:43.061505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.829 [2024-06-10 14:06:43.061519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.829 [2024-06-10 14:06:43.061532] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.829 [2024-06-10 14:06:43.065260] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.829 [2024-06-10 14:06:43.074497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.829 [2024-06-10 14:06:43.075075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.829 [2024-06-10 14:06:43.075098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.829 [2024-06-10 14:06:43.075112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.829 [2024-06-10 14:06:43.075347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.829 [2024-06-10 14:06:43.075590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.829 [2024-06-10 14:06:43.075605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.829 [2024-06-10 14:06:43.075618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.829 [2024-06-10 14:06:43.079339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.829 [2024-06-10 14:06:43.088618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.829 [2024-06-10 14:06:43.089136] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.829 [2024-06-10 14:06:43.089159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.829 [2024-06-10 14:06:43.089173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.829 [2024-06-10 14:06:43.089409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.829 [2024-06-10 14:06:43.089653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.829 [2024-06-10 14:06:43.089668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.829 [2024-06-10 14:06:43.089680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.829 [2024-06-10 14:06:43.093409] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
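The repeated blocks above are the initiator-side retry loop rather than distinct failures: nothing is accepting connections on 10.0.0.2 port 4420 yet (the listener is only added further down), so every reset attempt dies at connect() with errno 111 (ECONNREFUSED), nvme_ctrlr_process_init reports the controller in error state, and bdev_nvme schedules the next retry. A minimal way to watch for the listener from a shell, independent of the harness (assumes an nc build that supports -z; this is not part of the test scripts):

    nc -z -w 1 10.0.0.2 4420; echo "exit=$?"              # non-zero while the port refuses connections
    until nc -z -w 1 10.0.0.2 4420; do sleep 0.5; done    # returns once the target listener is up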
00:38:28.829 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:28.829 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:38:28.829 14:06:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:28.829 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:28.829 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:28.829 [2024-06-10 14:06:43.102650] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.829 [2024-06-10 14:06:43.103222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.829 [2024-06-10 14:06:43.103246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.829 [2024-06-10 14:06:43.103260] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.829 [2024-06-10 14:06:43.103496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.829 [2024-06-10 14:06:43.103739] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.829 [2024-06-10 14:06:43.103755] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.829 [2024-06-10 14:06:43.103768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.829 [2024-06-10 14:06:43.107491] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.829 [2024-06-10 14:06:43.116733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.829 [2024-06-10 14:06:43.117239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.829 [2024-06-10 14:06:43.117261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.829 [2024-06-10 14:06:43.117275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.829 [2024-06-10 14:06:43.117510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.829 [2024-06-10 14:06:43.117758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.829 [2024-06-10 14:06:43.117773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.829 [2024-06-10 14:06:43.117787] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.829 [2024-06-10 14:06:43.121512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.829 [2024-06-10 14:06:43.130752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.829 [2024-06-10 14:06:43.131199] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.829 [2024-06-10 14:06:43.131222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.829 [2024-06-10 14:06:43.131236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.829 [2024-06-10 14:06:43.131471] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.829 [2024-06-10 14:06:43.131718] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.829 [2024-06-10 14:06:43.131733] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.829 [2024-06-10 14:06:43.131745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.829 [2024-06-10 14:06:43.135468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.829 [2024-06-10 14:06:43.144934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.829 14:06:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:28.829 [2024-06-10 14:06:43.145461] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.829 [2024-06-10 14:06:43.145484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.829 [2024-06-10 14:06:43.145497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.829 14:06:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:28.829 [2024-06-10 14:06:43.145739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.829 [2024-06-10 14:06:43.145979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.829 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.829 [2024-06-10 14:06:43.145994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.829 [2024-06-10 14:06:43.146010] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.829 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:28.829 [2024-06-10 14:06:43.149744] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
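Interleaved with the reset errors, the shell trace shows the harness arming its cleanup trap: on SIGINT, SIGTERM or EXIT it dumps the target's shared-memory stats (process_shm) and runs nvmftestfini, so the target process and test network state are torn down even if the test aborts early. A stripped-down version of the same pattern (the cleanup function, the tgt_pid variable and the RPC invocation below are illustrative, not the harness's actual code):

    cleanup() {
        ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 || true   # best-effort RPC teardown
        kill "$tgt_pid" 2>/dev/null && wait "$tgt_pid"                              # stop the target app
    }
    trap cleanup SIGINT SIGTERM EXIT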
00:38:28.829 [2024-06-10 14:06:43.152819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:28.829 [2024-06-10 14:06:43.158978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.829 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.829 14:06:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:28.829 [2024-06-10 14:06:43.159562] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.829 [2024-06-10 14:06:43.159591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.829 [2024-06-10 14:06:43.159605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.829 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.830 [2024-06-10 14:06:43.159842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:28.830 [2024-06-10 14:06:43.160080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.830 [2024-06-10 14:06:43.160095] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.830 [2024-06-10 14:06:43.160108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.830 [2024-06-10 14:06:43.163848] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.830 [2024-06-10 14:06:43.173096] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.830 [2024-06-10 14:06:43.173677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.830 [2024-06-10 14:06:43.173701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.830 [2024-06-10 14:06:43.173714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.830 [2024-06-10 14:06:43.173951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.830 [2024-06-10 14:06:43.174188] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.830 [2024-06-10 14:06:43.174202] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.830 [2024-06-10 14:06:43.174215] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.830 [2024-06-10 14:06:43.177952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.830 [2024-06-10 14:06:43.187200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.830 [2024-06-10 14:06:43.187783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.830 [2024-06-10 14:06:43.187808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.830 [2024-06-10 14:06:43.187821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.830 [2024-06-10 14:06:43.188058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.830 [2024-06-10 14:06:43.188295] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.830 [2024-06-10 14:06:43.188309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.830 [2024-06-10 14:06:43.188322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.830 [2024-06-10 14:06:43.192064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.830 Malloc0 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:28.830 [2024-06-10 14:06:43.201308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.830 [2024-06-10 14:06:43.201901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.830 [2024-06-10 14:06:43.201924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.830 [2024-06-10 14:06:43.201938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.830 [2024-06-10 14:06:43.202174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.830 [2024-06-10 14:06:43.202412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.830 [2024-06-10 14:06:43.202426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.830 [2024-06-10 14:06:43.202438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.830 [2024-06-10 14:06:43.206169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:28.830 [2024-06-10 14:06:43.215409] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.830 [2024-06-10 14:06:43.215929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.830 [2024-06-10 14:06:43.215951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5820 with addr=10.0.0.2, port=4420 00:38:28.830 [2024-06-10 14:06:43.215965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5820 is same with the state(5) to be set 00:38:28.830 [2024-06-10 14:06:43.216201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5820 (9): Bad file descriptor 00:38:28.830 [2024-06-10 14:06:43.216438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:28.830 [2024-06-10 14:06:43.216452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:28.830 [2024-06-10 14:06:43.216465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:28.830 [2024-06-10 14:06:43.217634] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:28.830 [2024-06-10 14:06:43.220197] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.830 14:06:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1642530 00:38:28.830 [2024-06-10 14:06:43.229433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.830 [2024-06-10 14:06:43.261945] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
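From here the target side is rebuilt step by step over rpc_cmd (which appears to be the harness's wrapper around the application's JSON-RPC socket): a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and finally the listener on 10.0.0.2 port 4420; once the listener is up, the pending reset completes ("Resetting controller successful"). A minimal equivalent using scripts/rpc.py directly, with the same arguments the trace shows (a sketch only; assumes the default RPC socket of an already running target):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420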
00:38:38.796 00:38:38.796 Latency(us) 00:38:38.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.796 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:38.796 Verification LBA range: start 0x0 length 0x4000 00:38:38.796 Nvme1n1 : 15.00 6066.09 23.70 9473.14 0.00 8210.70 851.97 26633.83 00:38:38.796 =================================================================================================================== 00:38:38.796 Total : 6066.09 23.70 9473.14 0.00 8210.70 851.97 26633.83 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:38.796 rmmod nvme_tcp 00:38:38.796 rmmod nvme_fabrics 00:38:38.796 rmmod nvme_keyring 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1643591 ']' 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1643591 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 1643591 ']' 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 1643591 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1643591 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1643591' 00:38:38.796 killing process with pid 1643591 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 1643591 00:38:38.796 14:06:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 1643591 00:38:38.796 14:06:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:38.796 14:06:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:38:38.796 14:06:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:38.796 14:06:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:38.796 14:06:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:38.796 14:06:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.796 14:06:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:38.796 14:06:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.174 14:06:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:40.174 00:38:40.174 real 0m29.845s 00:38:40.174 user 1m3.504s 00:38:40.174 sys 0m9.916s 00:38:40.174 14:06:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:40.174 14:06:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:40.174 ************************************ 00:38:40.174 END TEST nvmf_bdevperf 00:38:40.174 ************************************ 00:38:40.174 14:06:54 nvmf_tcp -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:40.174 14:06:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:40.174 14:06:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:40.174 14:06:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:40.174 ************************************ 00:38:40.174 START TEST nvmf_target_disconnect 00:38:40.174 ************************************ 00:38:40.174 14:06:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:40.174 * Looking for test storage... 
00:38:40.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:40.174 14:06:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:40.174 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:38:40.174 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:40.174 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:40.174 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:40.174 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:40.174 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:40.174 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:40.174 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:40.174 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:40.174 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:38:40.175 14:06:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:38:48.286 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:48.287 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:48.287 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.287 14:07:02 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:48.287 Found net devices under 0000:af:00.0: cvl_0_0 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:48.287 Found net devices under 0000:af:00.1: cvl_0_1 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:48.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:48.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:38:48.287 00:38:48.287 --- 10.0.0.2 ping statistics --- 00:38:48.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.287 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:48.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:48.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:38:48.287 00:38:48.287 --- 10.0.0.1 ping statistics --- 00:38:48.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.287 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:48.287 ************************************ 00:38:48.287 START TEST nvmf_target_disconnect_tc1 00:38:48.287 ************************************ 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:38:48.287 
14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:48.287 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:48.288 EAL: No free 2048 kB hugepages reported on node 1 00:38:48.288 [2024-06-10 14:07:02.630630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:48.288 [2024-06-10 14:07:02.630687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa02ec0 with addr=10.0.0.2, port=4420 00:38:48.288 [2024-06-10 14:07:02.630714] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:48.288 [2024-06-10 14:07:02.630729] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:48.288 [2024-06-10 14:07:02.630746] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:38:48.288 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:48.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:48.288 Initializing NVMe Controllers 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:48.288 00:38:48.288 real 0m0.167s 00:38:48.288 user 0m0.060s 00:38:48.288 sys 0m0.106s 
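The tc1 case that just completed is a purely negative check: with no subsystem listening on 10.0.0.2:4420 yet, spdk_nvme_probe() inside the reconnect example must fail, and the test passes precisely because it fails. A minimal sketch of that assertion, using the same example binary and connect string as in the trace (the NOT/valid_exec_arg plumbing from autotest_common.sh is left out, and the relative build path is an assumption):

    # tc1 sketch: expect the probe to fail because no target is listening yet
    if ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo 'unexpected success: something is already listening on 10.0.0.2:4420' >&2
        exit 1
    fi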
00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:48.288 ************************************ 00:38:48.288 END TEST nvmf_target_disconnect_tc1 00:38:48.288 ************************************ 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:48.288 ************************************ 00:38:48.288 START TEST nvmf_target_disconnect_tc2 00:38:48.288 ************************************ 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1649415 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1649415 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1649415 ']' 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:48.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:48.288 14:07:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:48.546 [2024-06-10 14:07:02.792685] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
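The cvl_0_0_ns_spdk namespace that nvmf_tgt is being started into here was prepared by the nvmf_tcp_init trace further up. Stripped of the shell bookkeeping, that preparation is a short ip/iptables recipe; a minimal sketch, assuming the same cvl_0_0/cvl_0_1 interface pair and 10.0.0.0/24 addressing as this run:

    # Target-side port goes into its own namespace; initiator side stays in the default one
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic (port 4420) in, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp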
00:38:48.546 [2024-06-10 14:07:02.792743] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:48.546 EAL: No free 2048 kB hugepages reported on node 1 00:38:48.546 [2024-06-10 14:07:02.921547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:48.546 [2024-06-10 14:07:03.006671] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:48.546 [2024-06-10 14:07:03.006718] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:48.546 [2024-06-10 14:07:03.006732] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:48.546 [2024-06-10 14:07:03.006744] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:48.546 [2024-06-10 14:07:03.006754] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:48.546 [2024-06-10 14:07:03.006887] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:38:48.546 [2024-06-10 14:07:03.007016] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:38:48.546 [2024-06-10 14:07:03.007126] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:38:48.546 [2024-06-10 14:07:03.007126] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 7 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:49.478 Malloc0 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:49.478 [2024-06-10 14:07:03.775191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:49.478 [2024-06-10 14:07:03.803464] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1649691 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:49.478 14:07:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:49.478 EAL: No free 2048 kB hugepages reported on node 1 00:38:51.375 14:07:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1649415 00:38:51.375 14:07:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 
00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 [2024-06-10 14:07:05.833467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting 
I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 [2024-06-10 14:07:05.833785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 
00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Read completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.375 Write completed with error (sct=0, sc=8) 00:38:51.375 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 [2024-06-10 14:07:05.834096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 
Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Write completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 Read completed with error (sct=0, sc=8) 00:38:51.376 starting I/O failed 00:38:51.376 [2024-06-10 14:07:05.834314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:51.376 [2024-06-10 14:07:05.834658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.834684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.834943] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.834961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.835330] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.835370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.835699] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.835740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.836103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.836143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.836539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.836586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.836891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.836908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 
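The 32-deep bursts of "completed with error" above, each closed by a CQ transport error -6 on one qpair, and the connect() retries that follow are the intended outcome of tc2 rather than a malfunction: the target is brought up and configured, a reconnect workload is started against it, and the target process is then killed out from under that workload. Condensed into the underlying commands (a sketch only: paths are assumed relative to the SPDK tree, and rpc.py reaching the target over the default /var/tmp/spdk.sock is an assumption; the trace drives the same calls through rpc_cmd):

    # Start the target inside the prepared namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    tgt_pid=$!
    # ... wait for the RPC socket to appear ...

    # Malloc-backed subsystem listening on 10.0.0.2:4420, as in the rpc_cmd calls above
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Start the I/O workload, give it a moment, then kill the target to force the disconnect
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2
    kill -9 "$tgt_pid"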
00:38:51.376 [2024-06-10 14:07:05.837207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.837223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.837479] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.837496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.837774] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.837790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.838094] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.838110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.838383] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.838399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.838709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.838726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.839003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.839043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.839276] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.839315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.839623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.839664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.839903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.839946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 
00:38:51.376 [2024-06-10 14:07:05.840194] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.840210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.840481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.840521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.840850] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.376 [2024-06-10 14:07:05.840890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.376 qpair failed and we were unable to recover it. 00:38:51.376 [2024-06-10 14:07:05.841168] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.841185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 00:38:51.377 [2024-06-10 14:07:05.841454] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.841470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 00:38:51.377 [2024-06-10 14:07:05.841727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.841744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 00:38:51.377 [2024-06-10 14:07:05.841938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.841955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 00:38:51.377 [2024-06-10 14:07:05.842224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.842240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 00:38:51.377 [2024-06-10 14:07:05.842422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.842438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 00:38:51.377 [2024-06-10 14:07:05.842742] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.842759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 
00:38:51.377 [2024-06-10 14:07:05.843005] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.843021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 00:38:51.377 [2024-06-10 14:07:05.843402] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.843426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 00:38:51.377 [2024-06-10 14:07:05.843758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.843776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 00:38:51.377 [2024-06-10 14:07:05.844018] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.844067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 00:38:51.377 [2024-06-10 14:07:05.844468] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.844508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 00:38:51.377 [2024-06-10 14:07:05.844886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.377 [2024-06-10 14:07:05.844926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:51.377 qpair failed and we were unable to recover it. 00:38:51.377 [2024-06-10 14:07:05.845217] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.845234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.845559] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.845579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.845795] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.845835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.846147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.846173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 
00:38:51.646 [2024-06-10 14:07:05.846505] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.846549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.846856] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.846897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.847297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.847337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.847713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.847755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.848124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.848139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.848406] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.848418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.848756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.848768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.849105] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.849117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.849462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.849477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.849764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.849776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 
00:38:51.646 [2024-06-10 14:07:05.850091] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.850103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.646 qpair failed and we were unable to recover it. 00:38:51.646 [2024-06-10 14:07:05.850418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.646 [2024-06-10 14:07:05.850435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.850698] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.850711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.851029] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.851069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.851372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.851412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.851759] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.851798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.852175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.852214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.852602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.852643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.853006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.853046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.853414] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.853453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 
00:38:51.647 [2024-06-10 14:07:05.853826] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.853866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.854206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.854218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.854529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.854541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.854865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.854905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.855278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.855316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.855663] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.855702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.856064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.856076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.856312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.856324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.856629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.856642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.856878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.856890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 
00:38:51.647 [2024-06-10 14:07:05.857178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.857190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.857514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.857554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.857880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.857920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.858267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.858279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.858526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.858565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.858939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.858978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.859252] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.859264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.859562] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.859574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.859864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.859876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.860131] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.860143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 
00:38:51.647 [2024-06-10 14:07:05.860362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.860374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.860621] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.860633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.860859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.860871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.861204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.861216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.861522] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.861560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.861930] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.861970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.862253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.862292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.647 qpair failed and we were unable to recover it. 00:38:51.647 [2024-06-10 14:07:05.862663] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.647 [2024-06-10 14:07:05.862703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.863039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.863051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.863332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.863344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 
00:38:51.648 [2024-06-10 14:07:05.863665] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.863677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.863992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.864004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.864368] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.864406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.864685] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.864725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.864999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.865011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.865365] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.865376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.865703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.865743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.866123] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.866162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.866516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.866538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.866840] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.866881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 
00:38:51.648 [2024-06-10 14:07:05.867237] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.867276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.867625] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.867666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.868059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.868098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.868475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.868514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.868819] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.868858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.869230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.869269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.869616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.869657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.870034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.870073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.870298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.870337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.870729] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.870769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 
00:38:51.648 [2024-06-10 14:07:05.871155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.871193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.871561] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.871617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.871917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.871957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.872305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.872345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.872731] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.872772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.873008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.873020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.873359] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.873371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.873648] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.873687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.874059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.874099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.874334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.874346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 
00:38:51.648 [2024-06-10 14:07:05.874604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.874616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.874926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.874938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.648 qpair failed and we were unable to recover it. 00:38:51.648 [2024-06-10 14:07:05.875253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.648 [2024-06-10 14:07:05.875293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.875631] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.875671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.876080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.876119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.876446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.876486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.876850] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.876890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.877266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.877304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.877651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.877710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.878087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.878127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 
00:38:51.649 [2024-06-10 14:07:05.878526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.878564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.878925] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.878965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.879351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.879391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.879743] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.879783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.880171] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.880210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.880586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.880625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.880972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.881012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.881390] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.881429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.881802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.881842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.882216] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.882255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 
00:38:51.649 [2024-06-10 14:07:05.882572] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.882621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.882911] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.882950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.883319] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.883358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.883675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.883715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.884039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.884078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.884446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.884485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.884857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.884897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.885206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.885246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.885616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.885657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.886007] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.886045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 
00:38:51.649 [2024-06-10 14:07:05.886340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.886352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.886652] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.886665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.886952] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.886964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.887260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.887272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.887614] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.887653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.888034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.888074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.649 [2024-06-10 14:07:05.888454] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.649 [2024-06-10 14:07:05.888494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.649 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.888870] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.888910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.889270] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.889282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.889570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.889586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 
00:38:51.650 [2024-06-10 14:07:05.889813] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.889824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.890057] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.890069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.890356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.890368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.890653] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.890665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.890944] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.890956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.891276] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.891315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.891689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.891729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.892102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.892141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.892431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.892443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.892803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.892843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 
00:38:51.650 [2024-06-10 14:07:05.893149] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.893188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.893559] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.893616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.893988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.894026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.894355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.894367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.894621] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.894633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.894953] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.894993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.895341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.895381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.895700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.895739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.896116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.896156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.896501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.896541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 
00:38:51.650 [2024-06-10 14:07:05.896918] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.896957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.897327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.897366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.897746] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.897785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.898103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.898142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.898513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.898552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.898949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.898988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.899354] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.899393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.899766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.899807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.900170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.900209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.900601] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.900641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 
00:38:51.650 [2024-06-10 14:07:05.901006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.901046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.901433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.901477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.901796] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.901837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.902130] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.650 [2024-06-10 14:07:05.902142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.650 qpair failed and we were unable to recover it. 00:38:51.650 [2024-06-10 14:07:05.902341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.902352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.902584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.902596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.902845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.902857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.903166] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.903178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.903462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.903474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.903785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.903798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 
00:38:51.651 [2024-06-10 14:07:05.904117] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.904156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.904526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.904566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.904954] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.904993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.905284] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.905324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.905694] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.905734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.906115] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.906155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.906520] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.906559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.906916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.906956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.907341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.907380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.907748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.907788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 
00:38:51.651 [2024-06-10 14:07:05.908174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.908213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.908491] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.908503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.908739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.908751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.908986] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.908998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.909311] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.909322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.909611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.909623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.909935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.909981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.910338] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.910377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.910763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.910804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 00:38:51.651 [2024-06-10 14:07:05.911150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.911189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.651 qpair failed and we were unable to recover it. 
00:38:51.651 [2024-06-10 14:07:05.911563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.651 [2024-06-10 14:07:05.911616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.911986] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.912025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.912403] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.912442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.912669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.912709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.913077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.913116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.913489] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.913528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.913926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.913967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.914340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.914379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.914747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.914787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.915141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.915180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 
00:38:51.652 [2024-06-10 14:07:05.915543] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.915590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.915939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.915989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.916360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.916399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.916776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.916827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.917222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.917269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.917555] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.917567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.917873] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.917885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.918174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.918186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.918446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.918458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.918741] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.918753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 
00:38:51.652 [2024-06-10 14:07:05.919041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.919053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.919374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.919386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.919693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.919733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.920089] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.920128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.920434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.920446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.920705] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.920717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.921027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.921039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.921371] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.921411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.921656] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.921696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.921978] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.922017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 
00:38:51.652 [2024-06-10 14:07:05.922342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.922382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.922728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.922740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.923058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.923097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.923465] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.923504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.923885] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.923925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.924289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.924328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.924654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.924667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.652 [2024-06-10 14:07:05.924989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.652 [2024-06-10 14:07:05.925028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.652 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.925330] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.925370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.925745] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.925785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 
00:38:51.653 [2024-06-10 14:07:05.926156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.926195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.926486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.926498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.926756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.926768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.927031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.927044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.927280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.927292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.927531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.927543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.927853] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.927866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.928193] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.928232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.928603] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.928644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.929017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.929056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 
00:38:51.653 [2024-06-10 14:07:05.929430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.929469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.929839] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.929885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.930260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.930298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.930535] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.930547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.930878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.930891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.931188] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.931200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.931495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.931535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.931960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.932043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.932464] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.932543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.932883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.932927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 
00:38:51.653 [2024-06-10 14:07:05.933284] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.933325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.933675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.933716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.934097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.934137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.934510] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.934550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.934935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.934975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.935300] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.935340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.935594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.935634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.936008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.936047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.936366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.936405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.936779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.936819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 
00:38:51.653 [2024-06-10 14:07:05.937191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.937230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.937603] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.937643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.937936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.937975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.938261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.938300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.938695] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.653 [2024-06-10 14:07:05.938735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.653 qpair failed and we were unable to recover it. 00:38:51.653 [2024-06-10 14:07:05.939055] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.939096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.939475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.939513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.939893] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.939933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.940320] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.940360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.940662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.940702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 
00:38:51.654 [2024-06-10 14:07:05.941052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.941091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.941481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.941521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.941898] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.941939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.942323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.942362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.942735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.942774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.943085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.943134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.943444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.943464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.943772] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.943791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.944072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.944091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.944345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.944364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 
00:38:51.654 [2024-06-10 14:07:05.944638] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.944657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.944940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.944967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.945316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.945357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.945731] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.945771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.946144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.946183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.946504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.946543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.946929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.946968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.947339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.947378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.947668] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.947708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.948080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.948119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 
00:38:51.654 [2024-06-10 14:07:05.948495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.948534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.948809] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.948850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.949130] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.949171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.949421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.949441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.949743] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.949763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.950025] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.950045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.950393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.950412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.950739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.950759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.951064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.951084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.951334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.951374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 
00:38:51.654 [2024-06-10 14:07:05.951688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.951728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.952013] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.952053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.654 qpair failed and we were unable to recover it. 00:38:51.654 [2024-06-10 14:07:05.952431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.654 [2024-06-10 14:07:05.952471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.952846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.952886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.953249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.953289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.953669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.953711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.954012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.954052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.954354] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.954394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.954750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.954793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.955151] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.955191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 
00:38:51.655 [2024-06-10 14:07:05.955551] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.955602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.955978] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.956018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.956368] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.956407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.956797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.956837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.957182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.957201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.957435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.957454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.957686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.957706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.958040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.958059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.958396] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.958415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.958754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.958774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 
00:38:51.655 [2024-06-10 14:07:05.959097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.959136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.959417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.959463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.959839] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.959880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.960250] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.960269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.960521] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.960540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.960875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.960896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.961208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.961229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.961598] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.961639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.961969] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.962009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.962376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.962396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 
00:38:51.655 [2024-06-10 14:07:05.962725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.962745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.963080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.963099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.963425] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.963444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.963773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.963793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.964133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.964152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.964493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.964533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.964855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.964896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.965270] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.965309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.965595] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.965636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 00:38:51.655 [2024-06-10 14:07:05.966008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.655 [2024-06-10 14:07:05.966047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.655 qpair failed and we were unable to recover it. 
00:38:51.656 [2024-06-10 14:07:05.966422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.656 [2024-06-10 14:07:05.966461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.656 qpair failed and we were unable to recover it. 00:38:51.656 [2024-06-10 14:07:05.966829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.656 [2024-06-10 14:07:05.966870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.656 qpair failed and we were unable to recover it. 00:38:51.656 [2024-06-10 14:07:05.967161] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.656 [2024-06-10 14:07:05.967201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.656 qpair failed and we were unable to recover it. 00:38:51.656 [2024-06-10 14:07:05.967566] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.656 [2024-06-10 14:07:05.967616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.656 qpair failed and we were unable to recover it. 00:38:51.656 [2024-06-10 14:07:05.967933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.656 [2024-06-10 14:07:05.967980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.656 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.968307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.968327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.968627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.968647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.969036] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.969056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.969330] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.969371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.969672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.969712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 
00:38:51.657 [2024-06-10 14:07:05.970003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.970042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.970358] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.970397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.970766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.970806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.971182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.971223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.971598] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.971639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.972013] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.972052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.972377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.972417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.972769] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.972810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.973182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.973221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.973609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.973649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 
00:38:51.657 [2024-06-10 14:07:05.974020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.974061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.974433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.974478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.974798] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.974840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.975136] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.975175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.975528] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.975567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.975954] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.975994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.976308] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.976327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.976596] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.976616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.976992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.977032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.977428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.977468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 
00:38:51.657 [2024-06-10 14:07:05.977839] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.977879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.978203] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.978223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.978548] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.978568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.978764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.978785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.979038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.979057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.979401] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.979421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.979753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.979783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.980089] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.980109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.980471] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.980510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 00:38:51.657 [2024-06-10 14:07:05.980919] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.657 [2024-06-10 14:07:05.980959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.657 qpair failed and we were unable to recover it. 
00:38:51.657 [2024-06-10 14:07:05.981334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.658 [2024-06-10 14:07:05.981374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.658 qpair failed and we were unable to recover it. 00:38:51.658 [2024-06-10 14:07:05.981657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.658 [2024-06-10 14:07:05.981698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.658 qpair failed and we were unable to recover it. 00:38:51.658 [2024-06-10 14:07:05.982006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.658 [2024-06-10 14:07:05.982045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.658 qpair failed and we were unable to recover it. 00:38:51.658 [2024-06-10 14:07:05.982396] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.658 [2024-06-10 14:07:05.982442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.658 qpair failed and we were unable to recover it. 00:38:51.658 [2024-06-10 14:07:05.982758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.658 [2024-06-10 14:07:05.982798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.658 qpair failed and we were unable to recover it. 00:38:51.658 [2024-06-10 14:07:05.983168] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.658 [2024-06-10 14:07:05.983207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.658 qpair failed and we were unable to recover it. 00:38:51.658 [2024-06-10 14:07:05.983562] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.658 [2024-06-10 14:07:05.983615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.658 qpair failed and we were unable to recover it. 00:38:51.658 [2024-06-10 14:07:05.983980] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.658 [2024-06-10 14:07:05.984020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.658 qpair failed and we were unable to recover it. 00:38:51.658 [2024-06-10 14:07:05.984316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.658 [2024-06-10 14:07:05.984336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.658 qpair failed and we were unable to recover it. 00:38:51.658 [2024-06-10 14:07:05.984618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.658 [2024-06-10 14:07:05.984637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.658 qpair failed and we were unable to recover it. 
00:38:51.658 [2024-06-10 14:07:05.984967] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.658 [2024-06-10 14:07:05.984987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.658 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats continuously from 14:07:05.985 through 14:07:06.061 for every reconnect attempt: posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." ...]
00:38:51.664 [2024-06-10 14:07:06.062274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.062316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.062701] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.062741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.063052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.063072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.063333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.063353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.063623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.063644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.063957] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.063978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.064318] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.064358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.064688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.064729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.065069] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.065109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.065521] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.065561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 
00:38:51.664 [2024-06-10 14:07:06.065910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.065950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.066277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.066317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.066691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.066712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.067033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.067053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.067376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.067417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.067737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.067778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.068075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.068095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.068293] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.068314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.068591] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.068611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.068857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.068877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 
00:38:51.664 [2024-06-10 14:07:06.069165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.069184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.069474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.069494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.069834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.069855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.070187] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.070227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.070632] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.070673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.070961] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.070981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.071264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.071284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.071622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.071642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.071988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.072012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.072229] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.072249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 
00:38:51.664 [2024-06-10 14:07:06.072566] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.072593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.072916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.072936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.073205] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.664 [2024-06-10 14:07:06.073225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.664 qpair failed and we were unable to recover it. 00:38:51.664 [2024-06-10 14:07:06.073491] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.073511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.073889] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.073930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.074185] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.074225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.074614] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.074658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.074969] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.075009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.075421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.075464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.075844] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.075889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 
00:38:51.665 [2024-06-10 14:07:06.076147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.076187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.076587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.076629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.077017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.077057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.077416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.077456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.077845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.077865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.078110] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.078131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.078474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.078495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.078715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.078736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.079078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.079098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.079462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.079482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 
00:38:51.665 [2024-06-10 14:07:06.079786] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.079807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.080080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.080100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.080418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.080438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.080767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.080808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.081096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.081137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.081530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.081569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.081970] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.082010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.082368] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.082408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.082715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.082736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.083019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.083059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 
00:38:51.665 [2024-06-10 14:07:06.083350] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.083390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.083718] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.083759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.084090] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.084110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.084442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.084462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.084741] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.084762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.085041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.085062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.085441] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.085465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.085803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.085824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.086165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.086191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 00:38:51.665 [2024-06-10 14:07:06.086470] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.665 [2024-06-10 14:07:06.086490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.665 qpair failed and we were unable to recover it. 
00:38:51.665 [2024-06-10 14:07:06.086794] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.086816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.087165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.087185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.087518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.087538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.087866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.087887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.088077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.088098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.088441] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.088460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.088818] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.088838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.089083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.089103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.089454] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.089474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.089750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.089770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 
00:38:51.666 [2024-06-10 14:07:06.090052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.090072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.090374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.090394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.090746] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.090766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.090956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.090975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.091231] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.091251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.091604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.091624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.091918] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.091938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.092213] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.092233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.092554] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.092573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.092938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.092958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 
00:38:51.666 [2024-06-10 14:07:06.093270] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.093289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.093648] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.093668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.093995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.094015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.094297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.094317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.094657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.094677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.095024] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.095044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.095317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.095337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.095658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.095678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.095975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.095995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.096313] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.096334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 
00:38:51.666 [2024-06-10 14:07:06.096693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.666 [2024-06-10 14:07:06.096713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.666 qpair failed and we were unable to recover it. 00:38:51.666 [2024-06-10 14:07:06.097057] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.097077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.097362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.097381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.097694] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.097715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.098000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.098020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.098365] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.098386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.098727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.098748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.099012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.099031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.099276] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.099299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.099668] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.099688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 
00:38:51.667 [2024-06-10 14:07:06.100030] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.100050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.100360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.100380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.100714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.100734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.101082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.101102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.101433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.101453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.101788] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.101808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.102125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.102144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.102539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.102559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.102871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.102891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.103164] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.103184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 
00:38:51.667 [2024-06-10 14:07:06.103403] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.103423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.103787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.103808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.104077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.104097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.104478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.104497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.104827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.104847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.105147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.105166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.105529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.105548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.105798] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.105819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.106090] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.106109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 00:38:51.667 [2024-06-10 14:07:06.106461] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.667 [2024-06-10 14:07:06.106481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.667 qpair failed and we were unable to recover it. 
00:38:51.938 [2024-06-10 14:07:06.106769] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.938 [2024-06-10 14:07:06.106790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.938 qpair failed and we were unable to recover it. 00:38:51.938 [2024-06-10 14:07:06.107129] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.938 [2024-06-10 14:07:06.107149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.938 qpair failed and we were unable to recover it. 00:38:51.938 [2024-06-10 14:07:06.107497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.107517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.107847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.107868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.108122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.108141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.108503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.108523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.108843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.108862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.109085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.109105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.109445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.109465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.109731] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.109752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 
00:38:51.939 [2024-06-10 14:07:06.110041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.110061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.110282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.110301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.110620] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.110640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.110996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.111016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.111287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.111307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.111494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.111514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.111834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.111855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.112221] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.112241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.112588] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.112610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.112901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.112921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 
00:38:51.939 [2024-06-10 14:07:06.113169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.113189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.113525] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.113545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.113814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.113834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.114038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.114058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.114392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.114412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.114743] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.114763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.115083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.115102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.115459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.115479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.115708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.115728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.115949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.115969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 
00:38:51.939 [2024-06-10 14:07:06.116166] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.116186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.116454] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.116473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.116824] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.116844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.117051] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.117071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.117449] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.117468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.117714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.117734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.118075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.118095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.118350] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.118370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.118713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.939 [2024-06-10 14:07:06.118733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.939 qpair failed and we were unable to recover it. 00:38:51.939 [2024-06-10 14:07:06.118999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.119018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 
00:38:51.940 [2024-06-10 14:07:06.119241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.119261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.119598] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.119619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.119891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.119911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.120200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.120219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.120602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.120622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.120838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.120857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.121128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.121148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.121391] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.121411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.121721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.121741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.122002] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.122022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 
00:38:51.940 [2024-06-10 14:07:06.122291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.122311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.122559] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.122594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.122883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.122903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.123222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.123242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.123448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.123468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.123793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.123813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.124085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.124105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.124437] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.124457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.124704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.124728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.125062] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.125082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 
00:38:51.940 [2024-06-10 14:07:06.125422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.125442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.125775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.125796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.126073] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.126092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.126468] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.126487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.126750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.126770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.127089] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.127108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.127352] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.127372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.127619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.127639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.127949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.127969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.128234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.128253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 
00:38:51.940 [2024-06-10 14:07:06.128531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.128550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.128831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.128851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.129190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.129210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.940 qpair failed and we were unable to recover it. 00:38:51.940 [2024-06-10 14:07:06.129475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.940 [2024-06-10 14:07:06.129495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.129765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.129785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.129997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.130017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.130343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.130363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.130714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.130734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.131046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.131066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.131397] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.131416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 
00:38:51.941 [2024-06-10 14:07:06.131757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.131777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.131993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.132013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.132255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.132275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.132638] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.132658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.132862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.132882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.133085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.133105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.133364] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.133384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.133633] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.133652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.133913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.133932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.134192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.134211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 
00:38:51.941 [2024-06-10 14:07:06.134480] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.134499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.134779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.134798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.135142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.135162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.135522] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.135542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.135856] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.135877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.136242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.136262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.136586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.136606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.136924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.136944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.137208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.137231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.137511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.137530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 
00:38:51.941 [2024-06-10 14:07:06.137856] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.137876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.138148] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.138168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.138506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.138526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.138807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.138826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.139086] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.139106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.139437] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.139456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.139788] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.139808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.140153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.140173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.140515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.140535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 00:38:51.941 [2024-06-10 14:07:06.140829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.941 [2024-06-10 14:07:06.140848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.941 qpair failed and we were unable to recover it. 
00:38:51.941 [2024-06-10 14:07:06.141127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.141147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.141466] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.141485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.141756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.141776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.142116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.142135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.142494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.142513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.142841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.142862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.143210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.143230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.143501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.143520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.143839] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.143859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.144135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.144155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 
00:38:51.942 [2024-06-10 14:07:06.144516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.144536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.144867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.144887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.145199] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.145218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.145497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.145517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.145781] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.145801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.146087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.146107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.146444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.146464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.146781] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.146801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.147078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.147098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.147443] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.147462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 
00:38:51.942 [2024-06-10 14:07:06.147773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.147793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.148076] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.148095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.148386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.148406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.148688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.148708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.148946] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.148966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.149247] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.149266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.149481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.149501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.149776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.149796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.150111] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.150134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.150485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.150504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 
00:38:51.942 [2024-06-10 14:07:06.150787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.150807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.151116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.151135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.151497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.151517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.151813] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.151833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.152192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.152212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.152469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.152489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.942 qpair failed and we were unable to recover it. 00:38:51.942 [2024-06-10 14:07:06.152785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.942 [2024-06-10 14:07:06.152805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.153123] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.153143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.153422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.153442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.153706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.153725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 
00:38:51.943 [2024-06-10 14:07:06.154099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.154119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.154316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.154335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.154687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.154707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.155041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.155060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.155317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.155336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.155670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.155691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.156009] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.156029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.156383] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.156402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.156746] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.156766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.157101] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.157121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 
00:38:51.943 [2024-06-10 14:07:06.157395] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.157414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.157748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.157767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.157957] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.157977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.158223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.158243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.158604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.158624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.158898] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.158918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.159172] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.159191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.159514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.159533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.159777] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.159797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.160108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.160127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 
00:38:51.943 [2024-06-10 14:07:06.160489] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.160509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.160828] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.160848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.161207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.161227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.161547] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.161567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.161789] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.161808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.162133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.162152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.162492] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.943 [2024-06-10 14:07:06.162511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.943 qpair failed and we were unable to recover it. 00:38:51.943 [2024-06-10 14:07:06.162822] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.162842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.163155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.163177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.163488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.163507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 
00:38:51.944 [2024-06-10 14:07:06.163778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.163798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.164003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.164023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.164327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.164346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.164605] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.164624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.164934] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.164953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.165236] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.165255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.165609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.165629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.165961] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.165981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.166247] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.166266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.166502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.166521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 
00:38:51.944 [2024-06-10 14:07:06.166783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.166803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.167120] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.167139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.167501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.167521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.167818] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.167838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.168125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.168145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.168394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.168413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.168729] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.168749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.169003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.169023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.169382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.169401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.169730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.169750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 
00:38:51.944 [2024-06-10 14:07:06.169988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.170008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.170379] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.170400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.170737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.170757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.171092] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.171112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.171458] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.171477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.171752] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.171772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.172105] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.172124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.172375] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.172395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.172649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.172669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.172996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.173016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 
00:38:51.944 [2024-06-10 14:07:06.173356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.173375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.173647] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.173667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.173974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.173994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.174270] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.174290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.944 qpair failed and we were unable to recover it. 00:38:51.944 [2024-06-10 14:07:06.174615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.944 [2024-06-10 14:07:06.174635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.174896] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.174915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.175247] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.175266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.175527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.175546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.175878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.175901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.176159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.176178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 
00:38:51.945 [2024-06-10 14:07:06.176508] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.176527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.176779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.176798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.177110] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.177129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.177433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.177452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.177717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.177736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.178068] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.178088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.178374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.178393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.178698] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.178718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.179073] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.179093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.179377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.179397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 
00:38:51.945 [2024-06-10 14:07:06.179742] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.179762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.180094] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.180114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.180377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.180397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.180701] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.180721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.181033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.181052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.181262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.181281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.181462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.181482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.181822] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.181841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.182022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.182042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.182234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.182254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 
00:38:51.945 [2024-06-10 14:07:06.182490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.182510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.182807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.182827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.183136] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.183155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.183420] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.183439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.183816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.183836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.184154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.184173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.184480] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.184499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.184814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.184833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.185139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.185159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.185522] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.185541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 
00:38:51.945 [2024-06-10 14:07:06.185865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.185885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.186192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.186211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.945 qpair failed and we were unable to recover it. 00:38:51.945 [2024-06-10 14:07:06.186563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.945 [2024-06-10 14:07:06.186591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.186853] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.186873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.187126] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.187146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.187488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.187508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.187843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.187863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.188170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.188190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.188545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.188568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.188830] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.188859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 
00:38:51.946 [2024-06-10 14:07:06.189070] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.189090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.189415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.189436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.189691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.189711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.190015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.190035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.190249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.190268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.190599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.190620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.190829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.190849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.191059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.191080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.191359] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.191378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.191709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.191729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 
00:38:51.946 [2024-06-10 14:07:06.191935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.191954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.192211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.192231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.192589] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.192609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.192901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.192920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.193298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.193317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.193669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.193689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.193956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.193976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.194228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.194247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.194482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.194501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.194754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.194774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 
00:38:51.946 [2024-06-10 14:07:06.195022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.195041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.195260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.195280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.195531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.195550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.195816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.195836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.196097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.196117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.196366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.196385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.196692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.196712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.197036] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.197056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.197407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.197427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 00:38:51.946 [2024-06-10 14:07:06.197599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.946 [2024-06-10 14:07:06.197618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.946 qpair failed and we were unable to recover it. 
00:38:51.947 [2024-06-10 14:07:06.197878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.197898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.198103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.198123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.198487] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.198507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.198776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.198796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.199037] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.199056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.199416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.199435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.199698] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.199718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.199904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.199924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.200183] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.200205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.200545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.200565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 
00:38:51.947 [2024-06-10 14:07:06.200921] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.200941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.201256] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.201275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.201545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.201564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.201841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.201861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.202121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.202140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.202475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.202494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.202807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.202827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.203094] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.203113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.203465] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.203484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.203757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.203777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 
00:38:51.947 [2024-06-10 14:07:06.204034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.204053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.204314] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.204334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.204664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.204683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.204928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.204947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.205216] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.205236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.205588] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.205608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.205887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.205906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.206157] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.206177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.206433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.206452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.206796] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.206816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 
00:38:51.947 [2024-06-10 14:07:06.207171] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.207190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.207388] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.207407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.207722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.207741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.208088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.208107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.208371] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.208390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.208722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.208744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.209082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.209101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.947 [2024-06-10 14:07:06.209431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.947 [2024-06-10 14:07:06.209450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.947 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.209688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.209708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.209969] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.209988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 
00:38:51.948 [2024-06-10 14:07:06.210349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.210368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.210642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.210662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.210978] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.210997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.211350] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.211369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.211715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.211735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.212067] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.212086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.212430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.212450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.212783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.212803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.213078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.213096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.213432] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.213451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 
00:38:51.948 [2024-06-10 14:07:06.213779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.213799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.214142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.214161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.214365] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.214384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.214620] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.214639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.214850] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.214869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.215127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.215146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.215446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.215465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.215782] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.215802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.216079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.216098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.216360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.216379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 
00:38:51.948 [2024-06-10 14:07:06.216643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.216662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.216973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.217013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.217244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.217284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.217685] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.217725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.218095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.218134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.218492] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.218532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.218955] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.219041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.219412] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.219456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.219848] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.219893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 00:38:51.948 [2024-06-10 14:07:06.220258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.948 [2024-06-10 14:07:06.220298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.948 qpair failed and we were unable to recover it. 
00:38:51.948 [2024-06-10 14:07:06.220597] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.220618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.220890] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.220931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.221234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.221273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.221632] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.221673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.222027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.222067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.222377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.222399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.222654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.222673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.222927] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.222973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.223197] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.223236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.223613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.223654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 
00:38:51.949 [2024-06-10 14:07:06.224027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.224065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.224335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.224355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.224737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.224778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.225028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.225047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.225357] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.225397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.225763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.225807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.226069] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.226088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.226398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.226437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.226684] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.226724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.226984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.227024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 
00:38:51.949 [2024-06-10 14:07:06.227331] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.227371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.227738] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.227757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.227960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.227979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.228226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.228246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.228560] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.228639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.228966] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.229006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.229255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.229294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.229675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.229724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.229990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.230031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.230389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.230428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 
00:38:51.949 [2024-06-10 14:07:06.230811] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.230851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.231239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.231280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.231675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.231715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.232068] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.232107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.232405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.232445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.232691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.232731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.233102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.233141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.233515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.949 [2024-06-10 14:07:06.233554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.949 qpair failed and we were unable to recover it. 00:38:51.949 [2024-06-10 14:07:06.233881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.233928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.234305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.234345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 
00:38:51.950 [2024-06-10 14:07:06.234736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.234776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.235166] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.235206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.235593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.235634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.235895] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.235936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.236239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.236258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.236558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.236619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.236997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.237037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.237353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.237393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.237773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.237793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.238035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.238054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 
00:38:51.950 [2024-06-10 14:07:06.238431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.238470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.238790] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.238831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.239176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.239194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.239444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.239463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.239779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.239799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.240071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.240110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.240423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.240462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.240752] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.240772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.241037] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.241076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.241486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.241526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 
00:38:51.950 [2024-06-10 14:07:06.241846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.241887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.242208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.242226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.242565] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.242618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.242879] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.242919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.243210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.243229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.243560] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.243611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.243936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.243975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.244277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.244296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.244627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.244669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.244974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.245013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 
00:38:51.950 [2024-06-10 14:07:06.245262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.245281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.245622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.245642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.245903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.245949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.246291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.246329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.246640] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.246680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.950 qpair failed and we were unable to recover it. 00:38:51.950 [2024-06-10 14:07:06.246978] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.950 [2024-06-10 14:07:06.247019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.247446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.247485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.247788] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.247828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.248087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.248127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.248478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.248497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 
00:38:51.951 [2024-06-10 14:07:06.248846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.248886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.249180] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.249219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.249488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.249527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.249844] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.249884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.250192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.250211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.250484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.250530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.250841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.250882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.251183] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.251223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.251599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.251639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.251892] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.251932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 
00:38:51.951 [2024-06-10 14:07:06.252184] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.252223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.252603] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.252644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.252944] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.252983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.253377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.253396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.253599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.253639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.253948] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.253988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.254298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.254318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.254656] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.254676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.254869] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.254888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.255227] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.255268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 
00:38:51.951 [2024-06-10 14:07:06.255594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.255635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.255936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.255976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.256311] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.256351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.256662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.256703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.257054] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.257093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.257415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.257454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.257766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.257807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.258074] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.258115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.258502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.258541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.258791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.258831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 
00:38:51.951 [2024-06-10 14:07:06.259075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.951 [2024-06-10 14:07:06.259114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.951 qpair failed and we were unable to recover it. 00:38:51.951 [2024-06-10 14:07:06.259439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.259479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.259774] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.259814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.260175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.260195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.260517] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.260556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.260863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.260904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.261206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.261225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.261568] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.261636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.261943] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.261983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.262307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.262327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 
00:38:51.952 [2024-06-10 14:07:06.262528] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.262547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.262851] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.262892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.263148] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.263188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.263553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.263605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.263925] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.263965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.264217] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.264262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.264545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.264597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.264952] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.264993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.265287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.265327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.265702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.265743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 
00:38:51.952 [2024-06-10 14:07:06.266090] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.266130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.266443] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.266462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.266794] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.266834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.267077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.267117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.267438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.267477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.267767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.267808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.268059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.268078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.268287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.268306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.268561] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.268613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.268874] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.268914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 
00:38:51.952 [2024-06-10 14:07:06.269204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.269223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.269509] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.269548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.269960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.270000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.270396] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.270436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.270750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.270791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.271042] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.952 [2024-06-10 14:07:06.271061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.952 qpair failed and we were unable to recover it. 00:38:51.952 [2024-06-10 14:07:06.271398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.271438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.271726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.271768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.272133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.272172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.272528] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.272567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 
00:38:51.953 [2024-06-10 14:07:06.272900] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.272940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.273250] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.273270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.273619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.273659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.273970] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.274009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.274469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.274508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.274904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.274947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.275289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.275309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.275623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.275663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.276035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.276075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.276401] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.276440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 
00:38:51.953 [2024-06-10 14:07:06.276822] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.276862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.277146] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.277166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.277515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.277554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.277945] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.277995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.278330] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.278349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.278682] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.278728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.279057] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.279077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.279403] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.279444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.279688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.279729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.280031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.280071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 
00:38:51.953 [2024-06-10 14:07:06.282392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.282431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.282800] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.282824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.283082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.283101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.283442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.283483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.283877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.283918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.284291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.284311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.284641] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.284661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.284992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.285011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.953 [2024-06-10 14:07:06.285738] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.953 [2024-06-10 14:07:06.285766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.953 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.286134] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.286154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 
00:38:51.954 [2024-06-10 14:07:06.286356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.286375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.286614] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.286654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.286888] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.286928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.287307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.287346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.287717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.287758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.289134] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.289167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.289546] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.289600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.289858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.289897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.290132] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.290151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.290426] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.290445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 
00:38:51.954 [2024-06-10 14:07:06.290733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.290753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.291028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.291047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.291195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.291215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.291444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.291463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.291714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.291758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.292082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.292122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.292498] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.292517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.293725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.293759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.293996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.294016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.294361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.294387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 
00:38:51.954 [2024-06-10 14:07:06.294753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.294789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.295018] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.295050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.295338] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.295367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.295726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.295748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.296050] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.296070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.296440] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.296464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.296717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.296737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.297044] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.297063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.297439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.297458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.297789] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.297809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 
00:38:51.954 [2024-06-10 14:07:06.298115] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.298134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.298534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.298553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.298821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.298841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.299163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.299182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.299454] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.299473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.299788] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.299808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.954 qpair failed and we were unable to recover it. 00:38:51.954 [2024-06-10 14:07:06.300063] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.954 [2024-06-10 14:07:06.300082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.300287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.300306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.300557] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.300596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.300860] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.300880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 
00:38:51.955 [2024-06-10 14:07:06.301072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.301091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.301365] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.301384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.301663] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.301683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.301984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.302003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.302316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.302335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.302687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.302707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.302851] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.302870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.303184] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.303203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.303507] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.303526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.303804] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.303824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 
00:38:51.955 [2024-06-10 14:07:06.304106] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.304125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.304334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.304354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.304605] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.304625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.304895] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.304914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.305238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.305257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.305535] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.305554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.305846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.305866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.306216] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.306236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.306492] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.306512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.306754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.306774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 
00:38:51.955 [2024-06-10 14:07:06.306957] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.306976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.307226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.307245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.307584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.307604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.307861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.307880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.308207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.308226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.308531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.308553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.308847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.308925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.309251] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.309295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.309599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.309639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.309863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.309896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 
00:38:51.955 [2024-06-10 14:07:06.310111] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.310132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.310483] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.310502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.310823] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.955 [2024-06-10 14:07:06.310843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.955 qpair failed and we were unable to recover it. 00:38:51.955 [2024-06-10 14:07:06.311050] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.311069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.311263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.311282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.311562] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.311589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.311850] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.311869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.312141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.312160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.312401] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.312420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.312683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.312703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 
00:38:51.956 [2024-06-10 14:07:06.312952] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.312972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.313233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.313252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.313541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.313560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.313832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.313852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.314159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.314178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.314543] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.314562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.314829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.314848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.315127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.315147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.315474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.315493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.315827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.315847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 
00:38:51.956 [2024-06-10 14:07:06.316013] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.316032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.316235] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.316254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.316561] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.316589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.316886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.316905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.317091] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.317110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.317417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.317436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.317691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.317711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.318036] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.318055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.318400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.318419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.318748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.318767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 
00:38:51.956 [2024-06-10 14:07:06.319023] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.319043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.319371] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.319391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.319731] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.319751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.320046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.320066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.320334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.320353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.320695] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.320718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.320988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.321013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.321259] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.321272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.956 [2024-06-10 14:07:06.321585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.956 [2024-06-10 14:07:06.321598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.956 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.321885] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.321897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 
00:38:51.957 [2024-06-10 14:07:06.322135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.322147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.322482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.322494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.322753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.322766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.323096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.323108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.323352] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.323364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.323664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.323676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.323965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.323978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.324202] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.324213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.324551] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.324563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.324889] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.324901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 
00:38:51.957 [2024-06-10 14:07:06.325058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.325070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.325358] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.325370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.325680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.325692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.325943] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.325955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.326195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.326207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.326519] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.326531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.326851] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.326863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.327092] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.327104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.327396] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.327409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.327695] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.327707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 
00:38:51.957 [2024-06-10 14:07:06.327861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.327873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.328094] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.328106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.328366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.328380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.328686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.328699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.329010] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.329022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.329240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.329252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.329503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.329516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.329757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.329769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.330022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.330034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 00:38:51.957 [2024-06-10 14:07:06.330216] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.957 [2024-06-10 14:07:06.330227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.957 qpair failed and we were unable to recover it. 
00:38:51.957 [2024-06-10 14:07:06.330537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:51.957 [2024-06-10 14:07:06.330549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:51.958 qpair failed and we were unable to recover it.
00:38:51.958 [2024-06-10 14:07:06.330815 .. 14:07:06.383649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- the same three-message sequence is logged for every reconnect attempt in this interval.
00:38:51.963 [2024-06-10 14:07:06.383934] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:51.963 [2024-06-10 14:07:06.383946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:51.963 qpair failed and we were unable to recover it.
00:38:51.963 [2024-06-10 14:07:06.384193] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.963 [2024-06-10 14:07:06.384206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.963 qpair failed and we were unable to recover it. 00:38:51.963 [2024-06-10 14:07:06.384374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.963 [2024-06-10 14:07:06.384386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.963 qpair failed and we were unable to recover it. 00:38:51.963 [2024-06-10 14:07:06.384666] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.963 [2024-06-10 14:07:06.384679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.963 qpair failed and we were unable to recover it. 00:38:51.963 [2024-06-10 14:07:06.384989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.963 [2024-06-10 14:07:06.385001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.963 qpair failed and we were unable to recover it. 00:38:51.963 [2024-06-10 14:07:06.385308] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.963 [2024-06-10 14:07:06.385320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.963 qpair failed and we were unable to recover it. 00:38:51.963 [2024-06-10 14:07:06.385546] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.963 [2024-06-10 14:07:06.385558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.963 qpair failed and we were unable to recover it. 00:38:51.963 [2024-06-10 14:07:06.385847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.385859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.386145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.386157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.386473] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.386485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.386661] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.386673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 
00:38:51.964 [2024-06-10 14:07:06.386843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.386854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.386994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.387006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.387170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.387181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.387416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.387428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.387608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.387620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.387831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.387843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.388075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.388087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.388302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.388314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.388549] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.388561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.388856] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.388868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 
00:38:51.964 [2024-06-10 14:07:06.389040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.389052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.389376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.389388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.389624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.389639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.389870] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.389882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.390169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.390181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.390360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.390372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.390662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.390674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.390885] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.390897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.391078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.391090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.391317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.391329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 
00:38:51.964 [2024-06-10 14:07:06.391638] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.391651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.391802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.391813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.392027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.392039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.392288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.392300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.392621] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.392640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.392843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.392856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.393092] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.393104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.393324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.393336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.393450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.393462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.393630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.393642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 
00:38:51.964 [2024-06-10 14:07:06.393880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.393891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.394017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.394029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.394320] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.394336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.964 [2024-06-10 14:07:06.394595] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.964 [2024-06-10 14:07:06.394610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.964 qpair failed and we were unable to recover it. 00:38:51.965 [2024-06-10 14:07:06.394900] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.965 [2024-06-10 14:07:06.394911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.965 qpair failed and we were unable to recover it. 00:38:51.965 [2024-06-10 14:07:06.395174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.965 [2024-06-10 14:07:06.395186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.965 qpair failed and we were unable to recover it. 00:38:51.965 [2024-06-10 14:07:06.395402] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.965 [2024-06-10 14:07:06.395414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.965 qpair failed and we were unable to recover it. 00:38:51.965 [2024-06-10 14:07:06.395595] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.965 [2024-06-10 14:07:06.395608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.965 qpair failed and we were unable to recover it. 00:38:51.965 [2024-06-10 14:07:06.395778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.965 [2024-06-10 14:07:06.395790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.965 qpair failed and we were unable to recover it. 00:38:51.965 [2024-06-10 14:07:06.396011] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:51.965 [2024-06-10 14:07:06.396030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:51.965 qpair failed and we were unable to recover it. 
00:38:51.965 [2024-06-10 14:07:06.396270] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.240 [2024-06-10 14:07:06.396282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.240 qpair failed and we were unable to recover it. 00:38:52.240 [2024-06-10 14:07:06.396441] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.240 [2024-06-10 14:07:06.396453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.240 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.396738] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.396751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.396984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.396995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.397218] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.397231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.397497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.397509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.397733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.397745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.397988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.398001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.398262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.398275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.398525] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.398537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 
00:38:52.241 [2024-06-10 14:07:06.398757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.398770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.398956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.398968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.399190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.399205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.399374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.399386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.399555] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.399566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.399747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.399759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.399991] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.400003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.400224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.400235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.400386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.400398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.400549] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.400561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 
00:38:52.241 [2024-06-10 14:07:06.400781] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.400793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.401027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.401039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.401201] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.401213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.401452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.401463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.401631] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.401643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.401806] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.401818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.402128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.402140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.402426] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.402438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.402700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.402713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.402899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.402911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 
00:38:52.241 [2024-06-10 14:07:06.403132] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.403144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.403431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.403443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.403625] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.403638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.403787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.403799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.404026] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.404038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.404262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.404274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.404490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.404502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.241 qpair failed and we were unable to recover it. 00:38:52.241 [2024-06-10 14:07:06.404723] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.241 [2024-06-10 14:07:06.404735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.404966] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.404978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.405209] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.405221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 
00:38:52.242 [2024-06-10 14:07:06.405457] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.405469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.405750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.405762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.405914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.405926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.406086] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.406098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.406313] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.406325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.406568] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.406585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.406811] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.406822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.407075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.407086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.407393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.407405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.407704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.407716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 
00:38:52.242 [2024-06-10 14:07:06.407945] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.407957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.408193] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.408205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.408370] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.408385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.408605] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.408618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.408871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.408884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.409119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.409131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.409351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.409364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.409593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.409608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.409867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.409880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.410179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.410192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 
00:38:52.242 [2024-06-10 14:07:06.410478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.410491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.410617] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.410629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.410897] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.410914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.411135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.411150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.411373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.411386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.411571] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.411589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.411826] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.411839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.412071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.412083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.412233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.412245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.412535] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.412553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 
00:38:52.242 [2024-06-10 14:07:06.412773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.412786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.413016] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.413029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.413283] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.413295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.242 qpair failed and we were unable to recover it. 00:38:52.242 [2024-06-10 14:07:06.413561] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.242 [2024-06-10 14:07:06.413582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.413683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.413697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.413982] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.413996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.414313] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.414325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.414562] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.414580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.414747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.414759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.414939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.414952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 
00:38:52.243 [2024-06-10 14:07:06.415192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.415206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.415427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.415441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.415747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.415760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.415952] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.415964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.416202] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.416215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.416370] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.416383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.416601] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.416614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.416829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.416841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.417165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.417177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.417405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.417417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 
00:38:52.243 [2024-06-10 14:07:06.417727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.417739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.418098] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.418111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.418348] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.418363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.418584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.418597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.418759] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.418770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.419001] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.419013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.419185] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.419197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.419483] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.419495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.419607] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.419619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.419837] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.419849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 
00:38:52.243 [2024-06-10 14:07:06.420078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.420091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.420242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.420254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.420435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.420447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.420681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.420693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.420851] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.420863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.421031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.421042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.421277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.421289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.421458] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.421470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.421739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.421751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 00:38:52.243 [2024-06-10 14:07:06.422006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.243 [2024-06-10 14:07:06.422018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.243 qpair failed and we were unable to recover it. 
00:38:52.244 [2024-06-10 14:07:06.422255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.422266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.422447] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.422459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.422689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.422701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.422953] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.422964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.423200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.423211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.423499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.423511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.423675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.423687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.423838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.423850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.424027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.424038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.424213] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.424225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 
00:38:52.244 [2024-06-10 14:07:06.424513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.424525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.424755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.424768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.424984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.424996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.425218] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.425258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.425496] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.425535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.425768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.425808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.426088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.426128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.426341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.426380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.426628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.426640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.426877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.426888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 
00:38:52.244 [2024-06-10 14:07:06.427102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.427114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.427329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.427342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.427655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.427670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.427852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.427891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.428195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.428234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.428455] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.428466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.428718] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.428759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.429059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.429098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.429391] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.429430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.429778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.429818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 
00:38:52.244 [2024-06-10 14:07:06.430167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.430206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.430478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.430490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.430713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.430725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.430904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.430916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.244 [2024-06-10 14:07:06.431161] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.244 [2024-06-10 14:07:06.431172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.244 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.431330] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.431375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.431723] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.431764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.432154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.432194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.432418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.432453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.432740] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.432753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 
00:38:52.245 [2024-06-10 14:07:06.432974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.432986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.433204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.433215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.433500] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.433512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.433752] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.433765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.433941] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.433953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.434077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.434089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.434463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.434503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.434863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.434912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.435291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.435330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.435625] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.435665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 
00:38:52.245 [2024-06-10 14:07:06.435922] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.435961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.436245] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.436285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.436574] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.436630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.436953] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.436965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.437193] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.437208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.437397] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.437409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.437585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.437597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.437817] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.437833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.438008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.438021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.438190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.438203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 
00:38:52.245 [2024-06-10 14:07:06.438431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.438442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.438670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.438710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.438881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.438929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.439173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.439212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.439439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.245 [2024-06-10 14:07:06.439478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.245 qpair failed and we were unable to recover it. 00:38:52.245 [2024-06-10 14:07:06.439750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.439762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.439986] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.439998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.440265] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.440304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.440537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.440587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.440883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.440923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 
00:38:52.246 [2024-06-10 14:07:06.441201] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.441241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.441560] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.441613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.441838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.441877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.442179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.442219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.442400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.442412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.442643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.442683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.442913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.442953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.443296] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.443335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.443616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.443664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.443814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.443825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 
00:38:52.246 [2024-06-10 14:07:06.444119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.444131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.444297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.444309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.444555] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.444629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.444926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.444965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.445190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.445229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.445536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.445587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.445809] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.445822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.446058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.446070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.446253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.446265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.446516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.446556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 
00:38:52.246 [2024-06-10 14:07:06.446844] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.446884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.447190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.447229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.447528] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.447567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.447861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.447901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.448192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.448232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.448470] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.448504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.448680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.448693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.448853] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.448865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.449157] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.449196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.449542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.449590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 
00:38:52.246 [2024-06-10 14:07:06.449823] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.449864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.450167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.246 [2024-06-10 14:07:06.450207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.246 qpair failed and we were unable to recover it. 00:38:52.246 [2024-06-10 14:07:06.450481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.450495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.450740] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.450753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.450919] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.450930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.451079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.451090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.451359] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.451398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.451565] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.451615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.451931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.451971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.452213] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.452252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 
00:38:52.247 [2024-06-10 14:07:06.452553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.452608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.452900] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.452940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.453231] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.453270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.453558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.453610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.453890] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.453929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.454207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.454247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.454534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.454573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.454884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.454924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.455145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.455184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.455477] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.455516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 
00:38:52.247 [2024-06-10 14:07:06.455752] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.455764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.455986] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.455998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.456165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.456177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.456404] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.456443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.456810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.456850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.457149] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.457188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.457519] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.457532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.457673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.457685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.457923] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.457935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.458164] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.458177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 
00:38:52.247 [2024-06-10 14:07:06.458347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.458359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.458559] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.458630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.458843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.458882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.459239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.459278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.459515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.459554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.459932] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.459972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.460201] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.460241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.460523] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.460562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.460868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.247 [2024-06-10 14:07:06.460909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.247 qpair failed and we were unable to recover it. 00:38:52.247 [2024-06-10 14:07:06.461204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.248 [2024-06-10 14:07:06.461243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.248 qpair failed and we were unable to recover it. 
00:38:52.248 [2024-06-10 14:07:06.461468] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.248 [2024-06-10 14:07:06.461518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:52.248 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() refused with errno = 111, sock connection error for tqpair=0x7f7864000b90 at addr=10.0.0.2, port=4420, qpair failed and unrecoverable) repeats continuously for every reconnect attempt with log timestamps from 2024-06-10 14:07:06.461468 through 14:07:06.521385 ...]
00:38:52.253 [2024-06-10 14:07:06.521374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.253 [2024-06-10 14:07:06.521385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:52.253 qpair failed and we were unable to recover it.
00:38:52.253 [2024-06-10 14:07:06.521630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.253 [2024-06-10 14:07:06.521643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.253 qpair failed and we were unable to recover it. 00:38:52.253 [2024-06-10 14:07:06.521888] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.253 [2024-06-10 14:07:06.521900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.253 qpair failed and we were unable to recover it. 00:38:52.253 [2024-06-10 14:07:06.522205] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.253 [2024-06-10 14:07:06.522228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.253 qpair failed and we were unable to recover it. 00:38:52.253 [2024-06-10 14:07:06.522389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.253 [2024-06-10 14:07:06.522401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.253 qpair failed and we were unable to recover it. 00:38:52.253 [2024-06-10 14:07:06.522552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.253 [2024-06-10 14:07:06.522564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.253 qpair failed and we were unable to recover it. 00:38:52.253 [2024-06-10 14:07:06.522826] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.253 [2024-06-10 14:07:06.522866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.253 qpair failed and we were unable to recover it. 00:38:52.253 [2024-06-10 14:07:06.523103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.253 [2024-06-10 14:07:06.523144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.253 qpair failed and we were unable to recover it. 00:38:52.253 [2024-06-10 14:07:06.523374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.253 [2024-06-10 14:07:06.523413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.253 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.523651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.523691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.524060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.524099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 
00:38:52.254 [2024-06-10 14:07:06.524447] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.524486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.524791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.524803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.525081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.525093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.525340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.525379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.525747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.525788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.526043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.526055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.526305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.526317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.526623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.526635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.526810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.526822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.527056] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.527071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 
00:38:52.254 [2024-06-10 14:07:06.527264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.527303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.527598] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.527638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.528003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.528042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.528333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.528377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.528495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.528507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.528813] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.528826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.528996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.529008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.529225] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.529237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.529541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.529553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.529786] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.529799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 
00:38:52.254 [2024-06-10 14:07:06.530031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.530043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.530205] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.530217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.530484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.530523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.530930] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.530971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.531354] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.531394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.531761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.531802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.532013] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.532052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.532286] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.532325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.532601] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.532640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.533020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.533032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 
00:38:52.254 [2024-06-10 14:07:06.533205] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.533216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.533509] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.533520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.533757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.533769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.534023] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.534035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.534230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.254 [2024-06-10 14:07:06.534242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.254 qpair failed and we were unable to recover it. 00:38:52.254 [2024-06-10 14:07:06.534525] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.534537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.534775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.534788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.535096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.535128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.535422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.535461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.535828] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.535869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 
00:38:52.255 [2024-06-10 14:07:06.536147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.536188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.536553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.536608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.536925] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.536965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.537261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.537300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.537626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.537639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.537818] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.537831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.537995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.538007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.538270] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.538308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.538703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.538744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.539084] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.539097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 
00:38:52.255 [2024-06-10 14:07:06.539250] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.539261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.539507] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.539519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.539753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.539766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.539984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.539996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.540175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.540206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.540559] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.540607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.540975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.541015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.541382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.541421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.541712] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.541752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.542020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.542060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 
00:38:52.255 [2024-06-10 14:07:06.542337] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.542376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.542599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.542640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.542865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.542907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.543131] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.543143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.543296] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.543308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.543550] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.543563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.543757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.255 [2024-06-10 14:07:06.543769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.255 qpair failed and we were unable to recover it. 00:38:52.255 [2024-06-10 14:07:06.544055] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.544066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.544303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.544315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.544552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.544564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 
00:38:52.256 [2024-06-10 14:07:06.544683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.544695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.544979] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.544991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.545300] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.545312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.545618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.545646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.545864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.545887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.546109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.546121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.546361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.546373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.546673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.546685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.546972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.546984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.547222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.547234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 
00:38:52.256 [2024-06-10 14:07:06.547381] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.547393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.547568] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.547624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.547864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.547904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.548186] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.548225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.548453] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.548493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.548798] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.548838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.549121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.549160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.549389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.549438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.549675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.549687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.549912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.549926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 
00:38:52.256 [2024-06-10 14:07:06.550156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.550168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.550391] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.550403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.550586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.550598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.550765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.550787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.551030] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.551071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.551315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.551354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.551587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.551628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.551909] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.551949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.552173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.552213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.552591] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.552631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 
00:38:52.256 [2024-06-10 14:07:06.552942] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.552982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.553213] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.553252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.553476] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.553515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.553763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.553804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.256 [2024-06-10 14:07:06.554100] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.256 [2024-06-10 14:07:06.554112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.256 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.554416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.554428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.554648] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.554660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.554827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.554839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.555063] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.555102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.555326] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.555366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 
00:38:52.257 [2024-06-10 14:07:06.555660] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.555700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.555981] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.556020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.556251] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.556289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.556514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.556553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.556799] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.556839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.557053] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.557065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.557292] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.557304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.557523] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.557535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.557760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.557773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.558002] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.558041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 
00:38:52.257 [2024-06-10 14:07:06.558269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.558308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.558527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.558566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.558878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.558918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.559144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.559184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.559484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.559522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.559823] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.559864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.560211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.560250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.560554] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.560607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.560830] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.560870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.561065] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.561079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 
00:38:52.257 [2024-06-10 14:07:06.561319] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.561330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.561563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.561581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.561764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.561776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.562088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.562100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.562285] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.562296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.562484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.562496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.562716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.562729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.562904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.562915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.563152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.563164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.563333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.563346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 
00:38:52.257 [2024-06-10 14:07:06.563494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.563506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.257 [2024-06-10 14:07:06.563749] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.257 [2024-06-10 14:07:06.563761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.257 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.563924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.563936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.564091] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.564103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.564342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.564354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.564523] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.564535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.564721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.564733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.564896] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.564909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.565124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.565136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.565359] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.565372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 
00:38:52.258 [2024-06-10 14:07:06.565604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.565616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.565765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.565777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.566022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.566034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.566345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.566357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.566667] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.566680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.566848] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.566860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.567079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.567091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.567326] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.567338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.567514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.567525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.567753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.567766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 
00:38:52.258 [2024-06-10 14:07:06.567985] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.567998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.568220] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.568233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.568534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.568546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.568742] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.568754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.569067] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.569079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.569330] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.569343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.569511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.569523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.569672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.569684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.569858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.569870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.570157] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.570171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 
00:38:52.258 [2024-06-10 14:07:06.570408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.570420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.570585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.570597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.570883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.570896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.571137] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.571150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.571302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.571315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.571533] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.571545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.571779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.571793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.572104] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.572116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.258 [2024-06-10 14:07:06.572402] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.258 [2024-06-10 14:07:06.572414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.258 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.572670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.572683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 
00:38:52.259 [2024-06-10 14:07:06.572867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.572880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.573032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.573044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.573275] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.573287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.573478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.573490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.573722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.573735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.573906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.573918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.574135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.574147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.574434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.574446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.574680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.574692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.575010] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.575022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 
00:38:52.259 [2024-06-10 14:07:06.575255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.575267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.575566] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.575592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.575812] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.575824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.576009] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.576021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.576240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.576251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.576476] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.576488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.576729] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.576742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.576969] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.576981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.577225] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.577236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.577416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.577428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 
00:38:52.259 [2024-06-10 14:07:06.577658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.577671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.577893] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.577905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.578206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.578218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.578434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.578446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.578669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.578681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.578847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.578860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.579025] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.579036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.579333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.579345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.579642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.579655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 00:38:52.259 [2024-06-10 14:07:06.579892] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.259 [2024-06-10 14:07:06.579906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.259 qpair failed and we were unable to recover it. 
00:38:52.259 [2024-06-10 14:07:06.580058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.580070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.580303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.580314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.580559] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.580571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.580833] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.580846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.581035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.581047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.581269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.581281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.581451] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.581464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.581646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.581658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.581946] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.581958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.582187] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.582199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 
00:38:52.260 [2024-06-10 14:07:06.582433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.582446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.582759] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.582771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.583011] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.583023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.583196] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.583209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.583436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.583448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.583625] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.583638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.583877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.583888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.584048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.584060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.584274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.584286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.584439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.584451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 
00:38:52.260 [2024-06-10 14:07:06.584693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.584706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.584942] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.584954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.585267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.585279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.585516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.585528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.585834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.585846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.586065] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.586078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.586239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.586251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.586489] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.586501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.586653] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.586666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.586951] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.586963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 
00:38:52.260 [2024-06-10 14:07:06.587273] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.587285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.587592] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.587605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.587822] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.587834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.588003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.588016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.588314] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.588326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.588496] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.588508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.588745] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.260 [2024-06-10 14:07:06.588758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.260 qpair failed and we were unable to recover it. 00:38:52.260 [2024-06-10 14:07:06.589023] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.589035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.589284] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.589296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.589540] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.589554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 
00:38:52.261 [2024-06-10 14:07:06.589730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.589743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.589982] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.589994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.590224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.590236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.590472] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.590484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.590640] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.590653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.590884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.590896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.591135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.591147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.591318] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.591330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.591640] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.591652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.591884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.591896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 
00:38:52.261 [2024-06-10 14:07:06.592119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.592131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.592228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.592240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.592544] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.592556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.592866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.592879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.593006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.593019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.593257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.593269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.593504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.593516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.593841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.593853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.594158] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.594171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.594333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.594345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 
00:38:52.261 [2024-06-10 14:07:06.594586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.594598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.594754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.594766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.595017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.595029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.595268] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.595281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.595608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.595621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.595840] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.595852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.596144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.596158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.596388] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.596400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.596704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.596717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.596898] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.596910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 
00:38:52.261 [2024-06-10 14:07:06.597061] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.597074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.597239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.597251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.597564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.597584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.597794] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.597806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.261 [2024-06-10 14:07:06.598036] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.261 [2024-06-10 14:07:06.598049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.261 qpair failed and we were unable to recover it. 00:38:52.262 [2024-06-10 14:07:06.598209] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.262 [2024-06-10 14:07:06.598222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.262 qpair failed and we were unable to recover it. 00:38:52.262 [2024-06-10 14:07:06.598551] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.262 [2024-06-10 14:07:06.598564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.262 qpair failed and we were unable to recover it. 00:38:52.262 [2024-06-10 14:07:06.598761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.262 [2024-06-10 14:07:06.598773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.262 qpair failed and we were unable to recover it. 00:38:52.262 [2024-06-10 14:07:06.598952] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.262 [2024-06-10 14:07:06.598964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.262 qpair failed and we were unable to recover it. 00:38:52.262 [2024-06-10 14:07:06.599207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.262 [2024-06-10 14:07:06.599220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.262 qpair failed and we were unable to recover it. 
00:38:52.262 [2024-06-10 14:07:06.599398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.262 [2024-06-10 14:07:06.599411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:52.262 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1046:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats, differing only in timestamps, for every connection attempt logged between 14:07:06.599398 and 14:07:06.646698 ...]
00:38:52.267 [2024-06-10 14:07:06.646686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.267 [2024-06-10 14:07:06.646698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:52.267 qpair failed and we were unable to recover it.
00:38:52.267 [2024-06-10 14:07:06.646861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.267 [2024-06-10 14:07:06.646873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.267 qpair failed and we were unable to recover it. 00:38:52.267 [2024-06-10 14:07:06.647066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.267 [2024-06-10 14:07:06.647078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.267 qpair failed and we were unable to recover it. 00:38:52.267 [2024-06-10 14:07:06.647339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.267 [2024-06-10 14:07:06.647352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.267 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.647542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.647554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.647781] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.647793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.647968] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.647980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.648266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.648278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.648491] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.648503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.648664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.648676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.648929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.648941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 
00:38:52.268 [2024-06-10 14:07:06.649171] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.649183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.649416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.649428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.649731] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.649745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.649920] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.649932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.650084] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.650097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.650271] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.650283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.650457] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.650473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.650632] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.650645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.650807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.650819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.651015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.651027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 
00:38:52.268 [2024-06-10 14:07:06.651201] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.651213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.651377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.651390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.651561] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.651582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.651814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.651827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.652014] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.652026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.652244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.652255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.652475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.652488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.652670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.652683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.652854] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.652866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.653019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.653032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 
00:38:52.268 [2024-06-10 14:07:06.653179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.653191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.653407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.653420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.653592] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.653606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.653764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.653776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.653945] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.653957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.654110] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.654123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.654320] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.654332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.654506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.654518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.654675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.654687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 00:38:52.268 [2024-06-10 14:07:06.654905] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.268 [2024-06-10 14:07:06.654918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.268 qpair failed and we were unable to recover it. 
00:38:52.268 [2024-06-10 14:07:06.655089] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.655102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.655282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.655294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.655528] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.655541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.655722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.655734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.655961] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.655974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.656200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.656213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.656435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.656448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.656646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.656658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.656814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.656826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.657009] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.657021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 
00:38:52.269 [2024-06-10 14:07:06.657182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.657195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.657503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.657515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.657621] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.657633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.657857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.657869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.658088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.658100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.658337] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.658350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.658593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.658608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.658787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.658799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.658948] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.658961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.659199] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.659211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 
00:38:52.269 [2024-06-10 14:07:06.659427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.659439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.659636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.659649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.659935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.659947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.660231] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.660248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.660380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.660392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.660612] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.660624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.660904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.660917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.661161] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.661173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.661446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.661459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.661687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.661700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 
00:38:52.269 [2024-06-10 14:07:06.661940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.661953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.269 [2024-06-10 14:07:06.662171] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.269 [2024-06-10 14:07:06.662184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.269 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.662396] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.662408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.662591] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.662603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.662823] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.662835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.663054] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.663067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.663220] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.663232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.663448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.663460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.663746] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.663759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.663929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.663941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 
00:38:52.270 [2024-06-10 14:07:06.664177] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.664189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.664440] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.664452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.664737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.664750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.664988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.665004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.665297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.665309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.665491] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.665503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.665736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.665749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.665931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.665943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.666161] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.666173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.666402] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.666414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 
00:38:52.270 [2024-06-10 14:07:06.666715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.666728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.666965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.666977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.667197] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.667209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.667430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.667442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.667753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.667766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.667986] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.667999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.668223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.668238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.668420] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.668432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.668670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.668682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.668989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.669002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 
00:38:52.270 [2024-06-10 14:07:06.669240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.669253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.669404] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.669422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.669650] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.669662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.669948] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.669964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.670202] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.670214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.670433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.670446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.670618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.670631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.670947] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.670959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.270 [2024-06-10 14:07:06.671135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.270 [2024-06-10 14:07:06.671147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.270 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.671320] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.671332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 
00:38:52.271 [2024-06-10 14:07:06.671507] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.671519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.671710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.671722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.672031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.672042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.672283] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.672295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.672610] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.672622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.672844] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.672856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.673093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.673105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.673411] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.673423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.673654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.673666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.673906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.673918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 
00:38:52.271 [2024-06-10 14:07:06.674138] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.674150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.674385] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.674397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.674642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.674654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.674942] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.674954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.675242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.675254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.675491] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.675504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.675791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.675804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.676052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.676065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.676426] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.676439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.676662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.676675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 
00:38:52.271 [2024-06-10 14:07:06.676963] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.676976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.677167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.677181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.677401] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.677414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.677587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.677599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.677816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.677828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.678048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.678060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.678277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.678293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.678511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.678523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.678791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.271 [2024-06-10 14:07:06.678804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.271 qpair failed and we were unable to recover it. 00:38:52.271 [2024-06-10 14:07:06.679094] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.679107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 
00:38:52.272 [2024-06-10 14:07:06.679385] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.679397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.679554] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.679567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.679771] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.679784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.680043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.680055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.680274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.680287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.680527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.680539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.680853] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.680866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.681152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.681165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.681409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.681422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.681754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.681767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 
00:38:52.272 [2024-06-10 14:07:06.682026] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.682038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.682324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.682336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.682508] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.682520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.682756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.682768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.682938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.682950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.683168] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.683181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.683426] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.683438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.683656] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.683668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.683952] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.683964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.684145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.684156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 
00:38:52.272 [2024-06-10 14:07:06.684322] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.684334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.684580] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.684592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.684838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.684850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.685141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.685153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.685427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.685439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.685711] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.685724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.685913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.685927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.686214] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.686226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.686394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.686406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.686636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.686649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 
00:38:52.272 [2024-06-10 14:07:06.686806] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.686818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.687051] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.687064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.687289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.687301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.687453] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.687466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.687754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.687768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.272 [2024-06-10 14:07:06.687999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.272 [2024-06-10 14:07:06.688012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.272 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.688277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.688291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.688573] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.688591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.688905] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.688918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.689220] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.689233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 
00:38:52.273 [2024-06-10 14:07:06.689500] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.689512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.689803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.689817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.690133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.690153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.690267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.690285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.690401] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.690417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.690720] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.690736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.691015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.691028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.691292] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.691303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.691545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.691556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.691787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.691799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 
00:38:52.273 [2024-06-10 14:07:06.692103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.692115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.692356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.692372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.692624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.692638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.692859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.692871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.693087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.693098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.693415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.693454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.693685] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.693726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.694090] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.694105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.694269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.694281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.273 [2024-06-10 14:07:06.694449] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.694465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 
00:38:52.273 [2024-06-10 14:07:06.694691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.273 [2024-06-10 14:07:06.694705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.273 qpair failed and we were unable to recover it. 00:38:52.551 [2024-06-10 14:07:06.694887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.551 [2024-06-10 14:07:06.694899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.551 qpair failed and we were unable to recover it. 00:38:52.551 [2024-06-10 14:07:06.695156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.551 [2024-06-10 14:07:06.695167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.551 qpair failed and we were unable to recover it. 00:38:52.551 [2024-06-10 14:07:06.695448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.551 [2024-06-10 14:07:06.695460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.551 qpair failed and we were unable to recover it. 00:38:52.551 [2024-06-10 14:07:06.695723] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.551 [2024-06-10 14:07:06.695735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.551 qpair failed and we were unable to recover it. 00:38:52.551 [2024-06-10 14:07:06.695914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.551 [2024-06-10 14:07:06.695926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.551 qpair failed and we were unable to recover it. 00:38:52.551 [2024-06-10 14:07:06.696216] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.551 [2024-06-10 14:07:06.696227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.551 qpair failed and we were unable to recover it. 00:38:52.551 [2024-06-10 14:07:06.696534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.551 [2024-06-10 14:07:06.696546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.551 qpair failed and we were unable to recover it. 00:38:52.551 [2024-06-10 14:07:06.696785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.551 [2024-06-10 14:07:06.696797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.551 qpair failed and we were unable to recover it. 00:38:52.551 [2024-06-10 14:07:06.697097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.551 [2024-06-10 14:07:06.697109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.551 qpair failed and we were unable to recover it. 
00:38:52.551 [2024-06-10 14:07:06.697361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.551 [2024-06-10 14:07:06.697372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.551 qpair failed and we were unable to recover it. 00:38:52.551 [2024-06-10 14:07:06.697590] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.551 [2024-06-10 14:07:06.697602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.551 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.697910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.697922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.698149] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.698160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.698397] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.698408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.698574] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.698597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.698837] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.698853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.699145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.699185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.699570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.699627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.699996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.700036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 
00:38:52.552 [2024-06-10 14:07:06.700407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.700447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.700736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.700776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.701048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.701060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.701186] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.701198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.701489] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.701528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.701882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.701915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.702237] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.702277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.702508] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.702548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.702857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.702898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.703231] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.703243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 
00:38:52.552 [2024-06-10 14:07:06.703425] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.703437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.703671] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.703712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.704012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.704024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.704258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.704270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.704520] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.704532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.704790] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.704803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.705128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.705168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.705456] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.705496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.705800] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.705812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.706035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.706047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 
00:38:52.552 [2024-06-10 14:07:06.706277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.706289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.706513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.706552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.707220] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.707236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.707553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.707565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.707825] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.707838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.708124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.708135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.708393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.708405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.708553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.708565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.552 [2024-06-10 14:07:06.708709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.552 [2024-06-10 14:07:06.708720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.552 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.708891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.708903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 
00:38:52.553 [2024-06-10 14:07:06.709159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.709171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.709480] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.709492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.709726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.709738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.709909] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.709949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.710227] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.710266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.710615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.710655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.710884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.710929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.711230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.711269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.711567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.711627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.711854] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.711893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 
00:38:52.553 [2024-06-10 14:07:06.712258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.712298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.712644] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.712685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.713059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.713098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.713422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.713433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.713664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.713676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.713893] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.713904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.714137] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.714148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.714272] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.714284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.714543] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.714591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.714876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.714916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 
00:38:52.553 [2024-06-10 14:07:06.715226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.715238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.715411] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.715423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.715674] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.715686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.715866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.715878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.716137] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.716176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.716351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.716391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.716734] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.716785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.717103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.717149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.717488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.717502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.717813] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.717826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 
00:38:52.553 [2024-06-10 14:07:06.718125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.718164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.718400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.718439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.718681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.718723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.719010] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.719049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.719421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.719461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.719751] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.553 [2024-06-10 14:07:06.719791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.553 qpair failed and we were unable to recover it. 00:38:52.553 [2024-06-10 14:07:06.719997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.720009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.720206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.720228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.720525] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.720536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.720766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.720779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 
00:38:52.554 [2024-06-10 14:07:06.721083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.721095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.721405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.721417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.721584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.721596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.721851] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.721863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.722098] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.722137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.722498] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.722537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.722834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.722874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.723257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.723297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.723596] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.723637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.723933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.723972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 
00:38:52.554 [2024-06-10 14:07:06.724333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.724372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.724705] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.724746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.725142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.725181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.725376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.725388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.725629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.725641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.725857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.725870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.726088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.726101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.726368] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.726408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.726783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.726823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.727131] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.727142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 
00:38:52.554 [2024-06-10 14:07:06.727374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.727387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.727630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.727642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.727946] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.727958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.728173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.728185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.728490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.728502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.728831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.728854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.729018] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.729029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.729262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.729274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.729560] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.729571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.729901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.729948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 
00:38:52.554 [2024-06-10 14:07:06.730227] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.730267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.730550] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.730597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.554 [2024-06-10 14:07:06.730890] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.554 [2024-06-10 14:07:06.730930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.554 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.731298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.731343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.731713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.731753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.732132] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.732172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.732469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.732508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.732836] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.732877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.733169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.733207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.733510] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.733550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 
00:38:52.555 [2024-06-10 14:07:06.733786] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.733826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.734135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.734147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.734366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.734378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.734672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.734684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.734991] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.735003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.735302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.735341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.735689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.735729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.736028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.736068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.736394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.736406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.736692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.736721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 
00:38:52.555 [2024-06-10 14:07:06.737070] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.737109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.737456] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.737495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.737793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.737833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.738075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.738087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.738318] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.738331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.738499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.738511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.738730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.738742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.738964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.738976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.739194] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.739206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.739490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.739501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 
00:38:52.555 [2024-06-10 14:07:06.739736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.739748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.740138] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.740177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.740488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.740500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.740660] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.740673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.740979] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.740991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.741241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.741253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.741405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.741417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.741656] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.741696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.741908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.741947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.555 [2024-06-10 14:07:06.742175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.742215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 
00:38:52.555 [2024-06-10 14:07:06.742490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.555 [2024-06-10 14:07:06.742529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.555 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.742752] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.742793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.743140] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.743179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.743529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.743574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.743951] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.743990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.744317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.744329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.744619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.744660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.745035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.745074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.745442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.745482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.745771] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.745817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 
00:38:52.556 [2024-06-10 14:07:06.746102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.746141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.746452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.746469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.746785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.746800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.747088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.747100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.747335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.747347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.747581] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.747593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.747854] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.747867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.748039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.748051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.748307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.748319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.748641] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.748653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 
00:38:52.556 [2024-06-10 14:07:06.748875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.748887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.749172] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.749214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.749452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.749491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.749775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.749815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.750053] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.750091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.750398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.750410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.750697] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.750709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.751039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.751079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.751374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.751413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.751734] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.751774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 
00:38:52.556 [2024-06-10 14:07:06.751963] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.556 [2024-06-10 14:07:06.752002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.556 qpair failed and we were unable to recover it. 00:38:52.556 [2024-06-10 14:07:06.752300] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.752339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.752630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.752671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.753019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.753058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.753407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.753447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.753726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.753766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.754135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.754175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.754433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.754472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.754758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.754799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.755040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.755079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 
00:38:52.557 [2024-06-10 14:07:06.755468] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.755507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.755797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.755838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.756143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.756182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.756530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.756588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.756939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.756979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.757263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.757275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.757579] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.757591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.757924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.757936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.758159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.758171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.758474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.758486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 
00:38:52.557 [2024-06-10 14:07:06.758792] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.758804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.759037] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.759049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.759352] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.759363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.759582] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.759594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.759855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.759867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.760197] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.760219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.760549] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.760607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.760910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.760950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.761314] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.761353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.761732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.761773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 
00:38:52.557 [2024-06-10 14:07:06.762073] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.762111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.762359] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.762371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.762679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.762691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.762907] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.762919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.763147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.763159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.763376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.763388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.763701] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.763713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.763937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.763949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.764254] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.557 [2024-06-10 14:07:06.764266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.557 qpair failed and we were unable to recover it. 00:38:52.557 [2024-06-10 14:07:06.764501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.764512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 
00:38:52.558 [2024-06-10 14:07:06.764758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.764770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.765057] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.765069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.765355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.765367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.765601] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.765614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.765925] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.765936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.766242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.766282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.766659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.766699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.767072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.767111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.767475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.767487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.767800] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.767840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 
00:38:52.558 [2024-06-10 14:07:06.768205] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.768243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.768481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.768493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.768778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.768790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.769097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.769111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.769427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.769465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.769696] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.769736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.770116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.770155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.770442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.770481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.770797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.770838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.771219] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.771257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 
00:38:52.558 [2024-06-10 14:07:06.771490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.771528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.771769] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.771809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.772019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.772057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.772346] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.772386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.772709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.772722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.772957] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.772968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.773269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.773281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.773518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.773530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.773784] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.773796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.774112] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.774124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 
00:38:52.558 [2024-06-10 14:07:06.774361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.774373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.774697] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.774737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.775044] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.775083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.775469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.775508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.775864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.775904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.776259] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.776298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.776691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.558 [2024-06-10 14:07:06.776732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.558 qpair failed and we were unable to recover it. 00:38:52.558 [2024-06-10 14:07:06.777027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.777066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.777438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.777483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.777667] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.777679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 
00:38:52.559 [2024-06-10 14:07:06.777897] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.777909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.778147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.778159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.778386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.778398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.778705] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.778717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.779023] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.779035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.779282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.779293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.779550] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.779562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.779756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.779768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.779953] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.779993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.780369] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.780408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 
00:38:52.559 [2024-06-10 14:07:06.780672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.780684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.780916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.780928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.781156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.781168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.781398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.781443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.781728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.781768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.782116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.782156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.782511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.782550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.782949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.782990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.783343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.783382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.783755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.783794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 
00:38:52.559 [2024-06-10 14:07:06.784091] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.784130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.784478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.784517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.784845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.784885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.785200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.785239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.785595] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.785635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.786000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.786050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.786281] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.786292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.786512] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.786524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.786758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.786770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.786999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.787011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 
00:38:52.559 [2024-06-10 14:07:06.787241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.787253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.787537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.787549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.787858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.787871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.788108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.788120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.559 [2024-06-10 14:07:06.788411] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.559 [2024-06-10 14:07:06.788422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.559 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.788570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.788591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.788757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.788769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.789076] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.789088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.789405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.789444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.789797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.789838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 
00:38:52.560 [2024-06-10 14:07:06.790139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.790151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.790364] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.790376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.790612] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.790624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.790939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.790977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.791285] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.791323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.791568] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.791620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.791927] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.791966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.792298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.792309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.792615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.792627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.792884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.792895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 
00:38:52.560 [2024-06-10 14:07:06.793148] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.793159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.793451] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.793463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.793679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.793691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.794019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.794033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.794286] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.794297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.794529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.794541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.794760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.794773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.795010] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.795023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.795316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.795328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.795580] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.795592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 
00:38:52.560 [2024-06-10 14:07:06.795831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.795843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.796087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.796099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.796329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.796341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.796675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.796687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.796924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.796935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.797244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.797255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.797552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.797564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.797797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.797809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.560 qpair failed and we were unable to recover it. 00:38:52.560 [2024-06-10 14:07:06.797996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.560 [2024-06-10 14:07:06.798008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.798174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.798186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 
00:38:52.561 [2024-06-10 14:07:06.798410] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.798439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.798798] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.798838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.799141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.799181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.799546] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.799594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.799964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.800003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.800274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.800287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.800570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.800587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.800873] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.800885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.801183] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.801195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.801429] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.801441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 
00:38:52.561 [2024-06-10 14:07:06.801671] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.801683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.801908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.801920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.802140] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.802152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.802437] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.802449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.802693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.802705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.802921] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.802933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.803177] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.803189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.803492] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.803504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.803834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.803874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.804241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.804280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 
00:38:52.561 [2024-06-10 14:07:06.804561] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.804574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.804892] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.804904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.805083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.805095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.805417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.805431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.805673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.805686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.805969] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.805980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.806217] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.806229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.806482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.806495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.806750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.806762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.807020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.807032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 
00:38:52.561 [2024-06-10 14:07:06.807337] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.807389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.807622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.807662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.808035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.808074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.808377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.808416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.808808] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.561 [2024-06-10 14:07:06.808848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.561 qpair failed and we were unable to recover it. 00:38:52.561 [2024-06-10 14:07:06.809247] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.809288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.809627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.809639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.810020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.810059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.810439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.810479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.810773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.810814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 
00:38:52.562 [2024-06-10 14:07:06.811163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.811201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.811548] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.811596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.811988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.812027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.812389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.812429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.812700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.812712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.813045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.813057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.813276] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.813288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.813609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.813621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.813882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.813894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.814145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.814157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 
00:38:52.562 [2024-06-10 14:07:06.814393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.814406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.814636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.814648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.814871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.814883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.815174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.815186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.815494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.815506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.815823] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.815863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.816149] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.816189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.816466] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.816505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.816860] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.816899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.817268] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.817307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 
00:38:52.562 [2024-06-10 14:07:06.817696] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.817737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.818035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.818074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.818347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.818359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.818614] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.818628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.818868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.818880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.819167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.819179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.819463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.819474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.819712] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.819724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.820036] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.820048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.820289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.820301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 
00:38:52.562 [2024-06-10 14:07:06.820622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.820634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.820935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.820948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.562 qpair failed and we were unable to recover it. 00:38:52.562 [2024-06-10 14:07:06.821231] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.562 [2024-06-10 14:07:06.821243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.821494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.821506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.821742] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.821755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.822082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.822094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.822324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.822336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.822572] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.822589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.822873] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.822885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.823170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.823182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 
00:38:52.563 [2024-06-10 14:07:06.823430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.823442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.823677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.823689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.823976] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.823988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.824211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.824223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.824392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.824404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.824714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.824726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.824911] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.824923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.825200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.825212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.825392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.825430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.825659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.825699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 
00:38:52.563 [2024-06-10 14:07:06.826127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.826205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.826479] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.826522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.826864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.826905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.827254] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.827294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.827661] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.827702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.828102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.828141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.828511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.828550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.828928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.828968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.829264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.829303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.829673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.829712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 
00:38:52.563 [2024-06-10 14:07:06.830058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.830097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.830417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.830456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.830828] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.830868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.831232] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.831281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.831527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.831566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.831817] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.831857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.832228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.832267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.832653] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.832693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.832988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.833027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 00:38:52.563 [2024-06-10 14:07:06.833374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.563 [2024-06-10 14:07:06.833413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.563 qpair failed and we were unable to recover it. 
00:38:52.564 [2024-06-10 14:07:06.833766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.833806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.834105] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.834143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.834530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.834569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.834880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.834920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.835149] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.835188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.835485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.835524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.835904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.835945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.836326] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.836365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7858000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.836608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.836636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.836832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.836845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 
00:38:52.564 [2024-06-10 14:07:06.837153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.837165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.837340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.837352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.837585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.837600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.837857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.837870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.838101] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.838113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.838337] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.838349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.838585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.838598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.838885] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.838899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.839158] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.839172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.839481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.839493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 
00:38:52.564 [2024-06-10 14:07:06.839735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.839747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.839983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.839995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.840175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.840187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.840494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.840506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.840671] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.840683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.840993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.841004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.841195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.841207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.841473] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.841485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.841792] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.841804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.842060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.842072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 
00:38:52.564 [2024-06-10 14:07:06.842329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.842341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.842690] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.842704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.842973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.842985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.843203] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.843218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.843439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.843451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.843736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.843748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.844056] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.844068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.564 qpair failed and we were unable to recover it. 00:38:52.564 [2024-06-10 14:07:06.844307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.564 [2024-06-10 14:07:06.844320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.844488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.844500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.844662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.844674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 
00:38:52.565 [2024-06-10 14:07:06.844903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.844915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.845147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.845159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.845374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.845387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.845607] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.845620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.845803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.845815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.846046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.846058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.846346] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.846358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.846668] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.846681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.846858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.846870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.847181] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.847193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 
00:38:52.565 [2024-06-10 14:07:06.847414] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.847425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.847648] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.847661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.847944] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.847956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.848261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.848273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.848510] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.848522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.848855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.848867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.849171] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.849183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.849416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.849428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.849737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.849749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.849980] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.849992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 
00:38:52.565 [2024-06-10 14:07:06.850232] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.850246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.850532] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.850544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.850848] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.850860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.851083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.851095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.851353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.851365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.851594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.851606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.851831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.851843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.852021] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.852033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.852253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.852265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 00:38:52.565 [2024-06-10 14:07:06.852573] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.565 [2024-06-10 14:07:06.852591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.565 qpair failed and we were unable to recover it. 
00:38:52.566 [2024-06-10 14:07:06.852898] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.852910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.853204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.853216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.853538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.853550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.853782] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.853800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.853988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.854000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.854256] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.854269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.854450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.854463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.854700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.854714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.855004] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.855019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.855329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.855344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 
00:38:52.566 [2024-06-10 14:07:06.855600] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.855616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.855841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.855854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.856144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.856157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.856398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.856411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.856719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.856732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.856964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.856976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.857154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.857167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.857335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.857347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.857637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.857650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.857815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.857828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 
00:38:52.566 [2024-06-10 14:07:06.858145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.858158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.858392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.858404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.858635] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.858649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.858961] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.858974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.859224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.859236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.859497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.859509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.859691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.859707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.859993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.860008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.860191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.860208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.860450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.860463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 
00:38:52.566 [2024-06-10 14:07:06.860752] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.860766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.860936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.860949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.861180] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.861193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.861348] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.861362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.861667] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.861680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.861990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.862002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.862112] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.566 [2024-06-10 14:07:06.862126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.566 qpair failed and we were unable to recover it. 00:38:52.566 [2024-06-10 14:07:06.862364] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.862376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.862559] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.862571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.862864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.862876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 
00:38:52.567 [2024-06-10 14:07:06.863043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.863054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.863297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.863309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.863543] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.863555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.863792] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.863809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.864117] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.864130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.864284] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.864297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.864471] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.864483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.864791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.864804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.864987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.864999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.865150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.865165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 
00:38:52.567 [2024-06-10 14:07:06.865327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.865342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.865524] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.865538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.865829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.865844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.865946] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.865958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.866145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.866158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.866445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.866458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.866631] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.866646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.866806] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.866818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.867041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.867055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.867346] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.867358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 
00:38:52.567 [2024-06-10 14:07:06.867668] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.867681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.868012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.868025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.868267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.868279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.868498] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.868512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.868757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.868770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.869078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.869090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.869327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.869340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.869671] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.869685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.869924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.869937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.870175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.870187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 
00:38:52.567 [2024-06-10 14:07:06.870368] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.870381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.870573] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.870591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.870775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.870788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.871091] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.871103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.567 [2024-06-10 14:07:06.871361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.567 [2024-06-10 14:07:06.871374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.567 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.871610] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.871623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.871911] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.871923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.872144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.872157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.872349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.872361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.872646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.872658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 
00:38:52.568 [2024-06-10 14:07:06.872990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.873003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.873102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.873114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.873415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.873428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.873681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.873694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.873876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.873889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.874105] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.874119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.874335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.874348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.874592] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.874605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.874908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.874921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.875098] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.875110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 
00:38:52.568 [2024-06-10 14:07:06.875385] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.875398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.875700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.875713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.876018] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.876031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.876244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.876257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.876560] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.876573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.876865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.876878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.877108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.877121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.877357] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.877371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.877605] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.877618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.877850] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.877862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 
00:38:52.568 [2024-06-10 14:07:06.878170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.878182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.878461] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.878473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.878641] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.878653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.878910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.878921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.879084] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.879096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.879272] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.879284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.879470] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.879482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.879810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.879822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.880060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.880073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.880252] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.880264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 
00:38:52.568 [2024-06-10 14:07:06.880573] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.880599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.880776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.880788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.568 qpair failed and we were unable to recover it. 00:38:52.568 [2024-06-10 14:07:06.881026] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.568 [2024-06-10 14:07:06.881038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.881285] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.881297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.881535] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.881548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.881726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.881739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.881925] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.881937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.882040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.882052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.882285] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.882297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.882539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.882552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 
00:38:52.569 [2024-06-10 14:07:06.882706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.882719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.882964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.882977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.883275] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.883287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.883509] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.883521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.883836] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.883848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.884081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.884094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.884277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.884289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.884606] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.884618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.884859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.884871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.885155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.885168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 
00:38:52.569 [2024-06-10 14:07:06.885340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.885352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.885658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.885671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.885906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.885918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.886210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.886222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.886456] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.886468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.886771] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.886784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.887116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.887128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.887438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.887450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.887683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.887695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.887994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.888006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 
00:38:52.569 [2024-06-10 14:07:06.888336] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.888348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.888582] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.888594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.888770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.888782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.889093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.889105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.889290] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.889301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.889529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.889541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.889763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.889775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.890031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.890043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.890213] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.890225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 00:38:52.569 [2024-06-10 14:07:06.890550] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.569 [2024-06-10 14:07:06.890562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.569 qpair failed and we were unable to recover it. 
00:38:52.569 [2024-06-10 14:07:06.890852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.890866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.891047] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.891058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.891384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.891396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.891705] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.891718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.891999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.892011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.892128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.892139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.892374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.892385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.892702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.892742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.893041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.893080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.893394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.893433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 
00:38:52.570 [2024-06-10 14:07:06.893736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.893748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.894046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.894058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.894253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.894264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.894600] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.894612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.894904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.894944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.895339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.895377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.895746] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.895786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.896102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.896142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.896450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.896488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.896836] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.896876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 
00:38:52.570 [2024-06-10 14:07:06.897226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.897264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.897566] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.897614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.897978] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.898017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.898363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.898402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.898783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.898796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.899037] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.899049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.899364] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.899403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.899706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.899746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.900067] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.900080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.900320] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.900331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 
00:38:52.570 [2024-06-10 14:07:06.900506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.900517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.900810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.900850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.570 qpair failed and we were unable to recover it. 00:38:52.570 [2024-06-10 14:07:06.901219] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.570 [2024-06-10 14:07:06.901258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.901561] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.901573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.901824] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.901836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.902122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.902134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.902371] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.902383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.902689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.902701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.902933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.902946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.903169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.903181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 
00:38:52.571 [2024-06-10 14:07:06.903502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.903515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.903799] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.903811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.904095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.904107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.904389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.904400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.904736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.904748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.904982] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.904994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.905280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.905291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.905590] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.905618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.905852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.905864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.906150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.906162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 
00:38:52.571 [2024-06-10 14:07:06.906459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.906471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.906727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.906740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.906986] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.906998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.907218] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.907229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.907563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.907581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.907840] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.907852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.908000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.908012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.908253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.908292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.908616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.908656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.908984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.908996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 
00:38:52.571 [2024-06-10 14:07:06.909202] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.909214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.909535] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.909547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.909782] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.909794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.910024] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.910036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.910249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.910261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.910562] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.910574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.910892] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.910904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.911096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.911108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.911419] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.911431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 00:38:52.571 [2024-06-10 14:07:06.911691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.571 [2024-06-10 14:07:06.911703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.571 qpair failed and we were unable to recover it. 
00:38:52.572 [2024-06-10 14:07:06.911855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.911867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.912159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.912198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.912479] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.912518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.912892] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.912904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.913258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.913298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.913599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.913640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.913988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.914027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.914273] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.914311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.914636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.914676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.915021] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.915060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 
00:38:52.572 [2024-06-10 14:07:06.915442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.915487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.915716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.915735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.915954] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.915966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.916195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.916244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.916525] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.916564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.916871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.916886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.917181] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.917196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.917521] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.917533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.917768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.917781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.917944] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.917956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 
00:38:52.572 [2024-06-10 14:07:06.918241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.918253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.918548] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.918560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.918874] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.918887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.919144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.919156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.919398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.919410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.919655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.919667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.919839] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.919851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.920048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.920087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.920460] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.920500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.920792] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.920804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 
00:38:52.572 [2024-06-10 14:07:06.921088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.921100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.921359] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.921371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.921606] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.921618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.572 [2024-06-10 14:07:06.921899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.572 [2024-06-10 14:07:06.921911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.572 qpair failed and we were unable to recover it. 00:38:52.573 [2024-06-10 14:07:06.922074] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.573 [2024-06-10 14:07:06.922086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.573 qpair failed and we were unable to recover it. 00:38:52.573 [2024-06-10 14:07:06.922306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.573 [2024-06-10 14:07:06.922318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.573 qpair failed and we were unable to recover it. 00:38:52.573 [2024-06-10 14:07:06.922552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.573 [2024-06-10 14:07:06.922613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.573 qpair failed and we were unable to recover it. 00:38:52.573 [2024-06-10 14:07:06.922852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.573 [2024-06-10 14:07:06.922892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.573 qpair failed and we were unable to recover it. 00:38:52.573 [2024-06-10 14:07:06.923114] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.573 [2024-06-10 14:07:06.923152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.573 qpair failed and we were unable to recover it. 00:38:52.573 [2024-06-10 14:07:06.923513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.573 [2024-06-10 14:07:06.923552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.573 qpair failed and we were unable to recover it. 
00:38:52.573 [2024-06-10 14:07:06.923936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.573 [2024-06-10 14:07:06.923976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:52.573 qpair failed and we were unable to recover it.
00:38:52.573 [2024-06-10 14:07:06.924339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.573 [2024-06-10 14:07:06.924378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:52.573 qpair failed and we were unable to recover it.
00:38:52.573 - 00:38:52.580 [2024-06-10 14:07:06.924737 - 14:07:06.992308] the same three messages repeat for every subsequent reconnect attempt in this interval: posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:38:52.580 [2024-06-10 14:07:06.992647] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.992687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.992866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.992905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.993287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.993326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.993646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.993686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.994031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.994061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.994431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.994470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.994748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.994760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.994984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.995023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.995324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.995363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.995728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.995768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 
00:38:52.580 [2024-06-10 14:07:06.996095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.996134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.996427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.996467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.996814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.996854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.997173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.997212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.997532] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.997572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.997896] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.997935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.998300] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.998340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.998736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.998756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.998996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.999008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.999242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.999254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 
00:38:52.580 [2024-06-10 14:07:06.999538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.999550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:06.999720] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:06.999733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:07.000040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:07.000051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:07.000338] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:07.000350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:07.000594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:07.000611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:07.000940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:07.000953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:07.001267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:07.001279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:07.001516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:07.001528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:07.001771] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:07.001784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.580 qpair failed and we were unable to recover it. 00:38:52.580 [2024-06-10 14:07:07.001960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.580 [2024-06-10 14:07:07.001972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.581 qpair failed and we were unable to recover it. 
00:38:52.581 [2024-06-10 14:07:07.002257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.581 [2024-06-10 14:07:07.002269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.581 qpair failed and we were unable to recover it. 00:38:52.581 [2024-06-10 14:07:07.002502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.581 [2024-06-10 14:07:07.002517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.581 qpair failed and we were unable to recover it. 00:38:52.878 [2024-06-10 14:07:07.002807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.878 [2024-06-10 14:07:07.002820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.878 qpair failed and we were unable to recover it. 00:38:52.878 [2024-06-10 14:07:07.003065] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.878 [2024-06-10 14:07:07.003077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.878 qpair failed and we were unable to recover it. 00:38:52.878 [2024-06-10 14:07:07.003363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.878 [2024-06-10 14:07:07.003375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.878 qpair failed and we were unable to recover it. 00:38:52.878 [2024-06-10 14:07:07.003554] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.878 [2024-06-10 14:07:07.003566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.878 qpair failed and we were unable to recover it. 00:38:52.878 [2024-06-10 14:07:07.003675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.878 [2024-06-10 14:07:07.003688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.878 qpair failed and we were unable to recover it. 00:38:52.878 [2024-06-10 14:07:07.003784] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.003795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.004035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.004047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.004212] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.004224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 
00:38:52.879 [2024-06-10 14:07:07.004483] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.004498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.004765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.004778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.005088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.005101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.005335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.005352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.005602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.005617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.005783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.005798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.006129] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.006143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.006451] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.006463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.006716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.006729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.006962] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.006974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 
00:38:52.879 [2024-06-10 14:07:07.007278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.007290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.007626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.007638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.007881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.007899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.008172] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.008191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.008447] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.008466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.008794] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.008813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.009077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.009095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.009398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.009416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.009740] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.009760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.010104] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.010124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 
00:38:52.879 [2024-06-10 14:07:07.010372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.010390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.010653] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.010671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.010993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.011013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.011357] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.011375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.011708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.011727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.012076] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.012094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.012349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.012367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.012669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.012689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.012984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.013003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.013322] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.013341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 
00:38:52.879 [2024-06-10 14:07:07.013662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.013682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.013934] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.013952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.014198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.879 [2024-06-10 14:07:07.014216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.879 qpair failed and we were unable to recover it. 00:38:52.879 [2024-06-10 14:07:07.014518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.014536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.014828] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.014851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.015106] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.015121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.015341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.015356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.015541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.015555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.015782] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.015795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.016026] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.016039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 
00:38:52.880 [2024-06-10 14:07:07.016345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.016360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.016646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.016659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.016904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.016916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.017146] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.017158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.017339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.017351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.017529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.017541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.017788] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.017801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.018108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.018120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.018373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.018386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.018558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.018570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 
00:38:52.880 [2024-06-10 14:07:07.018818] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.018830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.019140] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.019153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.019437] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.019450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.019758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.019777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.020015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.020027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.020264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.020276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.020585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.020598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.020908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.020920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.021153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.021165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.021446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.021459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 
00:38:52.880 [2024-06-10 14:07:07.021674] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.021687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.021972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.021984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.022222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.022234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.022547] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.022559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.022829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.022842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.023158] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.023198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.023497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.023536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.023938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.023979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.024282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.024321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 00:38:52.880 [2024-06-10 14:07:07.024719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.880 [2024-06-10 14:07:07.024760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.880 qpair failed and we were unable to recover it. 
00:38:52.880 [2024-06-10 14:07:07.025134] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.025173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.025404] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.025444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.025807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.025848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.026222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.026261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.026625] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.026666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.027033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.027073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.027448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.027487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.027859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.027898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.028262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.028302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.028602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.028643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 
00:38:52.881 [2024-06-10 14:07:07.028995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.029034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.029392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.029432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.029654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.029695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.030015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.030054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.030404] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.030444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.030814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.030855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.031226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.031266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.031503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.031543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.031933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.031974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 00:38:52.881 [2024-06-10 14:07:07.032343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.881 [2024-06-10 14:07:07.032382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.881 qpair failed and we were unable to recover it. 
00:38:52.881 [2024-06-10 14:07:07.032667] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.881 [2024-06-10 14:07:07.032680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:52.881 qpair failed and we were unable to recover it.
00:38:52.881 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of the tqpair with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt up to the final entries below. Nearly all attempts report tqpair=0x7f7864000b90; four attempts between 14:07:07.047986 and 14:07:07.049159 report tqpair=0x2088fc0 instead. Every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:38:52.887 [2024-06-10 14:07:07.095725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.887 [2024-06-10 14:07:07.095738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:52.887 qpair failed and we were unable to recover it.
00:38:52.887 [2024-06-10 14:07:07.096053] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.096066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.096237] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.096249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.096534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.096547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.096860] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.096873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.097032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.097045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.097222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.097235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.097487] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.097499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.097814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.097831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.098000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.098015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.098239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.098252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 
00:38:52.887 [2024-06-10 14:07:07.098538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.098551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.098715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.098728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.098974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.098988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.099305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.099318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.099485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.099497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.099732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.099750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.100060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.887 [2024-06-10 14:07:07.100073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.887 qpair failed and we were unable to recover it. 00:38:52.887 [2024-06-10 14:07:07.100306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.100319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.888 qpair failed and we were unable to recover it. 00:38:52.888 [2024-06-10 14:07:07.100570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.100589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.888 qpair failed and we were unable to recover it. 00:38:52.888 [2024-06-10 14:07:07.100830] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.100843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.888 qpair failed and we were unable to recover it. 
00:38:52.888 [2024-06-10 14:07:07.101062] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.101075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.888 qpair failed and we were unable to recover it. 00:38:52.888 [2024-06-10 14:07:07.101300] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.101313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.888 qpair failed and we were unable to recover it. 00:38:52.888 [2024-06-10 14:07:07.101618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.101631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.888 qpair failed and we were unable to recover it. 00:38:52.888 [2024-06-10 14:07:07.101861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.101873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.888 qpair failed and we were unable to recover it. 00:38:52.888 [2024-06-10 14:07:07.102103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.102116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.888 qpair failed and we were unable to recover it. 00:38:52.888 [2024-06-10 14:07:07.102351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.102365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.888 qpair failed and we were unable to recover it. 00:38:52.888 [2024-06-10 14:07:07.102585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.102598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.888 qpair failed and we were unable to recover it. 00:38:52.888 [2024-06-10 14:07:07.102914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.102927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.888 qpair failed and we were unable to recover it. 00:38:52.888 [2024-06-10 14:07:07.103163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.103178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.888 qpair failed and we were unable to recover it. 00:38:52.888 [2024-06-10 14:07:07.103342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.888 [2024-06-10 14:07:07.103355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 
00:38:52.889 [2024-06-10 14:07:07.103654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.103667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.103954] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.103966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.104182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.104195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.104417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.104429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.104596] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.104610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.104841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.104854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.105075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.105088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.105392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.105406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.105666] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.105680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.105907] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.105922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 
00:38:52.889 [2024-06-10 14:07:07.106209] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.106221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.106482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.106494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.106802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.106815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.107061] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.107074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.107358] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.107371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.107704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.107717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.107886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.107899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.108233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.108248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.108536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.108549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.108730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.108743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 
00:38:52.889 [2024-06-10 14:07:07.109003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.109016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.109323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.109336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.109589] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.109601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.109820] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.109832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.110059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.110071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.110375] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.110388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.110692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.110706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.110941] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.110953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.111263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.111276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.111511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.111524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 
00:38:52.889 [2024-06-10 14:07:07.111689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.111702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.111931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.111944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.112202] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.112215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.112443] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.112456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.112638] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.112652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.112891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.889 [2024-06-10 14:07:07.112903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.889 qpair failed and we were unable to recover it. 00:38:52.889 [2024-06-10 14:07:07.113189] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.113202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.113418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.113430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.113645] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.113657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.113949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.113962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 
00:38:52.890 [2024-06-10 14:07:07.114240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.114252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.114494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.114506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.114747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.114759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.115045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.115057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.115305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.115317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.115550] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.115562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.115865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.115877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.116178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.116190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.116521] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.116534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.116698] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.116711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 
00:38:52.890 [2024-06-10 14:07:07.116893] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.116905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.117143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.117155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.117393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.117405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.117642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.117654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.117912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.117924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.118101] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.118113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.118400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.118412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.118653] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.118667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.118899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.118939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.119295] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.119335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 
00:38:52.890 [2024-06-10 14:07:07.119681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.119721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.119945] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.119957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.120190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.120202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.120377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.120389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.120728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.120768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.121118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.121157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.121431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.121443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.121686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.121726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.122073] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.122112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.122343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.122381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 
00:38:52.890 [2024-06-10 14:07:07.122690] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.122731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.123115] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.123154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.123368] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.890 [2024-06-10 14:07:07.123407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.890 qpair failed and we were unable to recover it. 00:38:52.890 [2024-06-10 14:07:07.123722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.123779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.124147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.124159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.124354] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.124365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.124677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.124717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.125039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.125078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.125455] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.125494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.125787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.125827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 
00:38:52.891 [2024-06-10 14:07:07.126178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.126218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.126583] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.126595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.126890] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.126929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.127224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.127264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.127549] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.127621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.127994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.128033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.128276] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.128315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.128614] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.128655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.129020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.129059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.129432] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.129471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 
00:38:52.891 [2024-06-10 14:07:07.129749] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.129789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.130090] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.130130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.130459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.130498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.130866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.130906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.131188] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.131227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.131526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.131565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.131924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.131964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.132320] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.132365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.132726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.132767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.133146] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.133185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 
00:38:52.891 [2024-06-10 14:07:07.133477] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.133516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.133896] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.133936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.134243] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.134287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.134595] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.134635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.134930] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.134942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.135178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.135217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.135539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.135605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.135887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.135926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.136066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.136078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.891 [2024-06-10 14:07:07.136319] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.136358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 
00:38:52.891 [2024-06-10 14:07:07.136710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.891 [2024-06-10 14:07:07.136750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.891 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.137058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.137098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.137446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.137485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.137729] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.137769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.138063] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.138075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.138361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.138373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.138591] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.138619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.138929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.138982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.139275] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.139314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.139676] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.139717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 
00:38:52.892 [2024-06-10 14:07:07.140066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.140105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.140490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.140530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.140904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.140946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.141306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.141317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.141485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.141497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.141743] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.141755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.142039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.142051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.142345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.142357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.142699] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.142739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.143022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.143061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 
00:38:52.892 [2024-06-10 14:07:07.143362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.143402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.143761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.143800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.144196] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.144236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.144586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.144632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.144929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.144968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.145313] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.145324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.145580] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.145592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.145776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.145790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.146015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.146027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.146213] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.146224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 
00:38:52.892 [2024-06-10 14:07:07.146515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.146553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.146743] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.146783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.147092] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.147131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.147365] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.147405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.147634] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.147675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.147889] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.147929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.148245] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.148257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.148588] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.892 [2024-06-10 14:07:07.148629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.892 qpair failed and we were unable to recover it. 00:38:52.892 [2024-06-10 14:07:07.148976] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.149015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.149397] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.149436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 
00:38:52.893 [2024-06-10 14:07:07.149803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.149843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.150196] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.150221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.150620] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.150659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.150966] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.151005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.151378] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.151417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.151712] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.151752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.152117] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.152157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.152536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.152585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.152938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.152978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.153347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.153386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 
00:38:52.893 [2024-06-10 14:07:07.153750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.153791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.154084] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.154123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.154363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.154375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.154659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.154672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.154918] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.154934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.155181] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.155194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.155510] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.155549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.155862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.155902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.156271] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.156310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.156682] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.156722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 
00:38:52.893 [2024-06-10 14:07:07.157085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.157124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.157405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.157416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.157769] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.157809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.158184] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.158224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.158504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.158544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.158904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.158944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.159305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.159345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.159728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.159768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.160127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.160166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 00:38:52.893 [2024-06-10 14:07:07.160526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.893 [2024-06-10 14:07:07.160538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.893 qpair failed and we were unable to recover it. 
00:38:52.893 [2024-06-10 14:07:07.160866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.160907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.161257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.161296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.161649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.161689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.162007] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.162055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.162371] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.162410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.162688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.162728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.163052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.163092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.163376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.163415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.163716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.163757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.164127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.164167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 
00:38:52.894 [2024-06-10 14:07:07.164552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.164716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.165028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.165068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.165455] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.165495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.165800] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.165842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.166081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.166120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.166502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.166541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.166930] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.166970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.167323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.167335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.167632] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.167655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.167891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.167930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 
00:38:52.894 [2024-06-10 14:07:07.168209] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.168248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.168536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.168548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.168856] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.168868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.169085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.169097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.169266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.169312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.169613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.169653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.170023] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.170063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.170408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.170447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.170828] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.170869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.171046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.171086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 
00:38:52.894 [2024-06-10 14:07:07.171367] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.171418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.171703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.171744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.171969] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.172008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.172354] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.172394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.172758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.172798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.173191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.173230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.173551] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.173601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.894 [2024-06-10 14:07:07.173983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.894 [2024-06-10 14:07:07.174023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.894 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.174369] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.174408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.174688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.174727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 
00:38:52.895 [2024-06-10 14:07:07.175025] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.175059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.175324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.175336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.175622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.175662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.176028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.176068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.176315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.176355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.176702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.176741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.177034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.177074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.177351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.177391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.177678] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.177718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.178125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.178164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 
00:38:52.895 [2024-06-10 14:07:07.178516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.178550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.178802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.178842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.179139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.179178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.179346] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.179386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.179751] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.179791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.180153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.180193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.180434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.180446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.180700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.180712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.180991] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.181003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.181369] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.181409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 
00:38:52.895 [2024-06-10 14:07:07.181635] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.181675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.182048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.182087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.182438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.182477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.182849] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.182889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.183186] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.183231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.183616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.183657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.183985] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.184025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.184258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.184297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.184694] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.184734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.185032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.185072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 
00:38:52.895 [2024-06-10 14:07:07.185314] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.185353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.185603] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.185615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.185930] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.185969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.186261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.186301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.186595] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.186607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.895 qpair failed and we were unable to recover it. 00:38:52.895 [2024-06-10 14:07:07.186949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.895 [2024-06-10 14:07:07.186988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.187362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.187402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.187770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.187811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.188176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.188216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.188394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.188405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 
00:38:52.896 [2024-06-10 14:07:07.188713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.188725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.188943] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.188955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.189282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.189322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.189620] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.189661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.190065] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.190104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.190453] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.190492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.190876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.190888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.191194] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.191233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.191479] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.191519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 00:38:52.896 [2024-06-10 14:07:07.191840] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.896 [2024-06-10 14:07:07.191881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.896 qpair failed and we were unable to recover it. 
00:38:52.896 [2024-06-10 14:07:07.192197] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.896 [2024-06-10 14:07:07.192237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:52.896 qpair failed and we were unable to recover it.
00:38:52.896 [ ... the same three-line sequence ("connect() failed, errno = 111"; "sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420"; "qpair failed and we were unable to recover it.") repeats continuously for this qpair from 14:07:07.192 through 14:07:07.263 ... ]
00:38:52.902 [2024-06-10 14:07:07.263564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.263629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.263859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.263899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.264202] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.264242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.264598] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.264638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.264852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.264897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.265288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.265327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.265716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.265756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.266055] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.266095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.266442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.266482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.266831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.266871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 
00:38:52.902 [2024-06-10 14:07:07.267179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.267219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.267509] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.267548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.267928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.267969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.268290] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.268329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.268629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.268668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.269046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.269085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.269433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.269482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.269788] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.269827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.270132] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.270172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.270497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.270509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 
00:38:52.902 [2024-06-10 14:07:07.270760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.270772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.271138] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.271177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.271506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.271545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.271987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.272027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.272399] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.272438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.272721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.272733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.273017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.273029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.273329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.902 [2024-06-10 14:07:07.273368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.902 qpair failed and we were unable to recover it. 00:38:52.902 [2024-06-10 14:07:07.273713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.273754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.274041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.274081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 
00:38:52.903 [2024-06-10 14:07:07.274318] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.274357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.274663] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.274704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.274942] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.274981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.275336] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.275375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.275758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.275798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.276165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.276204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.276554] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.276566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.276972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.277011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.277384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.277424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.277787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.277827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 
00:38:52.903 [2024-06-10 14:07:07.278121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.278161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.278398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.278410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.278717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.278729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.278963] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.278976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.279200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.279245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.279617] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.279660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.279944] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.279957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.280191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.280203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.280486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.280527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.280903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.280942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 
00:38:52.903 [2024-06-10 14:07:07.281247] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.281286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.281589] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.281630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.281877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.281916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.282271] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.282310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.282597] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.282636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.282895] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.282907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.283141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.283153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.283403] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.283415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.283710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.283752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.284127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.284166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 
00:38:52.903 [2024-06-10 14:07:07.284342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.284381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.284608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.284620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.284927] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.284940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.285226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.285239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.903 qpair failed and we were unable to recover it. 00:38:52.903 [2024-06-10 14:07:07.285532] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.903 [2024-06-10 14:07:07.285571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.285958] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.285998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.286294] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.286334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.286562] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.286612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.286808] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.286820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.287115] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.287127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 
00:38:52.904 [2024-06-10 14:07:07.287413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.287425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.287786] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.287827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.288199] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.288238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.288487] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.288499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.288727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.288740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.288964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.288976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.289190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.289202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.289432] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.289444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.289738] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.289750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.289967] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.289979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 
00:38:52.904 [2024-06-10 14:07:07.290198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.290210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.290528] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.290567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.290806] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.290846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.291136] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.291175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.291549] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.291601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.291927] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.291966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.292366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.292405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.292715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.292727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.292955] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.292966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.293223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.293235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 
00:38:52.904 [2024-06-10 14:07:07.293451] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.293463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.293805] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.293845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.294143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.294182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.294476] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.294516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.294834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.294875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.295158] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.295197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.295567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.295616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.295896] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.295935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.296265] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.296305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.904 [2024-06-10 14:07:07.296628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.296669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 
00:38:52.904 [2024-06-10 14:07:07.296973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.904 [2024-06-10 14:07:07.297013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.904 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.297342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.297382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.297691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.297704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.297887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.297899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.298148] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.298160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.298387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.298399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.298635] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.298647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.298890] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.298902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.299223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.299263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.299617] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.299658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 
00:38:52.905 [2024-06-10 14:07:07.299910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.299932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.300213] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.300253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.300623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.300663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.300951] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.300991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.301367] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.301406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.301777] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.301817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.302173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.302214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.302429] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.302468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.302704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.302716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.303027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.303039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 
00:38:52.905 [2024-06-10 14:07:07.303258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.303270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.303499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.303511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.303668] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.303680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.303848] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.303881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.304262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.304308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.304476] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.304487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.304727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.304767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.305046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.305086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.305316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.305355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 00:38:52.905 [2024-06-10 14:07:07.305635] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.305647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 
00:38:52.905 [2024-06-10 14:07:07.305906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.905 [2024-06-10 14:07:07.305918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:52.905 qpair failed and we were unable to recover it. 
[... the same two-line error pair repeats continuously from 14:07:07.305906 through 14:07:07.372092 (console timestamps 00:38:52.905 to 00:38:53.183): every connect() attempt for tqpair=0x7f7864000b90 to addr=10.0.0.2, port=4420 fails with errno = 111, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:38:53.183 [2024-06-10 14:07:07.372312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.372351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.183 [2024-06-10 14:07:07.372740] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.372781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.183 [2024-06-10 14:07:07.373154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.373194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.183 [2024-06-10 14:07:07.373558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.373606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.183 [2024-06-10 14:07:07.373901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.373941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.183 [2024-06-10 14:07:07.374103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.374145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.183 [2024-06-10 14:07:07.374460] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.374511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.183 [2024-06-10 14:07:07.374746] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.374759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.183 [2024-06-10 14:07:07.374975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.374988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.183 [2024-06-10 14:07:07.375170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.375183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 
00:38:53.183 [2024-06-10 14:07:07.375492] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.375532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.183 [2024-06-10 14:07:07.375711] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.375726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.183 [2024-06-10 14:07:07.375953] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.375994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.183 [2024-06-10 14:07:07.376361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.183 [2024-06-10 14:07:07.376401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.183 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.376710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.376723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.377024] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.377065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.377384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.377424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.377774] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.377816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.378122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.378163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.378511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.378550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 
00:38:53.184 [2024-06-10 14:07:07.378841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.378881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.379239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.379279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.379530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.379570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.379872] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.379912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.380207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.380247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.380639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.380686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.380979] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.380998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.381307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.381332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.381563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.381597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.381922] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.381963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 
00:38:53.184 [2024-06-10 14:07:07.382328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.382367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.382658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.382671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.382922] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.382962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.383242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.383282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.383632] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.383674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.383973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.384012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.384244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.384284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.384640] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.384654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.384885] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.384898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.385208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.385248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 
00:38:53.184 [2024-06-10 14:07:07.385533] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.385557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.385809] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.385822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.386118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.386158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.386374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.386415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.386716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.386757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.387127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.387167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.387411] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.387451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.184 [2024-06-10 14:07:07.387771] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.184 [2024-06-10 14:07:07.387785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.184 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.388007] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.388047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.388345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.388386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 
00:38:53.185 [2024-06-10 14:07:07.388726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.388739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.388996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.389012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.389306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.389346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.389694] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.389734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.390022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.390035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.390204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.390244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.390613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.390654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.390999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.391039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.391365] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.391404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.391661] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.391675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 
00:38:53.185 [2024-06-10 14:07:07.391914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.391927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.392211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.392224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.392464] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.392478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.392724] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.392737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.393040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.393053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.393363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.393403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.393699] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.393740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.394082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.394095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.394246] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.394259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.394544] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.394558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 
00:38:53.185 [2024-06-10 14:07:07.394874] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.394915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.395231] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.395271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.395643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.395683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.396025] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.396038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.396262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.396275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.396442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.396455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.396682] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.396723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.397020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.397060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.397415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.397456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.397665] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.397678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 
00:38:53.185 [2024-06-10 14:07:07.397904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.397944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.398318] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.398358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.398636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.398677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.185 qpair failed and we were unable to recover it. 00:38:53.185 [2024-06-10 14:07:07.398978] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.185 [2024-06-10 14:07:07.399018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.399397] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.399437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.399726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.399739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.400032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.400071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.400439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.400480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.400771] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.400784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.401076] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.401090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 
00:38:53.186 [2024-06-10 14:07:07.401315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.401356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.401652] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.401699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.401989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.402001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.402319] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.402359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.402608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.402649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.402869] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.402922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.403228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.403240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.403547] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.403595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.403898] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.403938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.404219] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.404258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 
00:38:53.186 [2024-06-10 14:07:07.404536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.404588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.404987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.405028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.405393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.405433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.405811] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.405852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.406213] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.406253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.406622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.406663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.407038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.407078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.407385] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.407425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.407796] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.407837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.408162] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.408201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 
00:38:53.186 [2024-06-10 14:07:07.408428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.408468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.408818] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.408858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.409260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.409300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.409647] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.409688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.410042] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.410083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.410476] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.410515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.410895] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.410936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.411267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.411307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.411688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.411729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.412057] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.412098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 
00:38:53.186 [2024-06-10 14:07:07.412494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.412534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.412781] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.186 [2024-06-10 14:07:07.412795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.186 qpair failed and we were unable to recover it. 00:38:53.186 [2024-06-10 14:07:07.413109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.413149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.413394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.413434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.413731] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.413773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.414019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.414061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.414347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.414360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.414675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.414715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.415111] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.415151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.415433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.415473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 
00:38:53.187 [2024-06-10 14:07:07.415891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.415932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.416272] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.416286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.416513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.416526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.416768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.416781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.417008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.417022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.417241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.417254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.417540] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.417553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.417726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.417739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.417997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.418036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.418411] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.418451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 
00:38:53.187 [2024-06-10 14:07:07.418840] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.418853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.419070] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.419082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.419245] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.419258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.419566] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.419582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.419900] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.419940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.420325] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.420364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.420655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.420696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.420999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.421039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.421321] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.421361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.421755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.421796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 
00:38:53.187 [2024-06-10 14:07:07.422120] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.422161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.422389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.422429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.422725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.422739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.423051] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.423091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.423321] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.423361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.423732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.423773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.424057] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.424096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.424469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.424508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.424835] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.424848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 00:38:53.187 [2024-06-10 14:07:07.425152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.187 [2024-06-10 14:07:07.425166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.187 qpair failed and we were unable to recover it. 
00:38:53.187 [2024-06-10 14:07:07.425396] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.425409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.425509] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.425522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.425837] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.425878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.426106] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.426146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.426390] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.426430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.426719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.426732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.426952] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.426966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.427189] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.427202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.427366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.427379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.427665] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.427696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 
00:38:53.188 [2024-06-10 14:07:07.427985] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.428025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.428417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.428462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.428834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.428875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.429255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.429268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.429583] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.429595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.429959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.430000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.430376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.430416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.430648] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.430690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.431040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.431080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.431455] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.431495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 
00:38:53.188 [2024-06-10 14:07:07.431862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.431896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.432221] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.432261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.432556] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.432605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.432843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.432884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.433245] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.433258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.433478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.433501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.433766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.433807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.434179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.434219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.434506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.434545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.434870] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.434911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 
00:38:53.188 [2024-06-10 14:07:07.435211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.435250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.435550] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.435600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.435981] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.436022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.436322] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.436361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.436672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.436713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.436996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.437037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.437423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.437436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.437726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.437767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.188 qpair failed and we were unable to recover it. 00:38:53.188 [2024-06-10 14:07:07.438050] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.188 [2024-06-10 14:07:07.438090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.438438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.438479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 
00:38:53.189 [2024-06-10 14:07:07.438821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.438834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.439063] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.439076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.439357] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.439370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.439560] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.439573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.439738] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.439778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.440061] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.440101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.440345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.440385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.440778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.440819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.441034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.441074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.441449] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.441488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 
00:38:53.189 [2024-06-10 14:07:07.441778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.441791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.442085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.442131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.442480] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.442519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.442847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.442888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.443251] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.443264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.443364] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.443377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.443687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.443699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.443968] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.444008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.444261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.444301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.444673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.444714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 
00:38:53.189 [2024-06-10 14:07:07.445079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.445120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.445401] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.445441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.445722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.445763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.446118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.446158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.446523] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.446563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.446939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.446980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.447338] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.447351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.447617] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.447650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.447951] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.447992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.448361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.448374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 
00:38:53.189 [2024-06-10 14:07:07.448563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.448617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.189 qpair failed and we were unable to recover it. 00:38:53.189 [2024-06-10 14:07:07.448903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.189 [2024-06-10 14:07:07.448916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.449104] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.449144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.449465] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.449505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.449902] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.449944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.450242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.450282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.450530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.450570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.450926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.450940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.451251] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.451263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.451587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.451628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 
00:38:53.190 [2024-06-10 14:07:07.451991] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.452032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.452382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.452422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.452770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.452811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.453162] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.453202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.453497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.453536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.453924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.453965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.454259] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.454300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.454599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.454640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.454919] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.454933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.455256] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.455296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 
00:38:53.190 [2024-06-10 14:07:07.455666] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.455708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.455995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.456010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.456306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.456345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.456687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.456728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.456953] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.456967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.457150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.457163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.457411] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.457423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.457649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.457691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.458064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.458103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.458286] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.458299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 
00:38:53.190 [2024-06-10 14:07:07.458626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.458668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.458946] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.458986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.459333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.459374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.459681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.459721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.459891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.459904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.460198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.460239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.460557] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.460608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.460925] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.460939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.461188] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.461201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 00:38:53.190 [2024-06-10 14:07:07.461436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.190 [2024-06-10 14:07:07.461449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.190 qpair failed and we were unable to recover it. 
00:38:53.191 [2024-06-10 14:07:07.461704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.461756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.462057] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.462097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.462343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.462382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.462679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.462717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.463018] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.463058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.463347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.463386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.463667] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.463708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.464063] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.464102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.464481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.464522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.464825] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.464866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 
00:38:53.191 [2024-06-10 14:07:07.465241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.465281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.465595] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.465637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.465997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.466038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.466455] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.466494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.466879] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.466920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.467203] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.467243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.467561] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.467613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.467984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.468024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.468387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.468426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.468786] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.468827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 
00:38:53.191 [2024-06-10 14:07:07.469168] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.469208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.469612] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.469659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.469969] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.470009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.470347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.470380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.470754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.470796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.471023] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.471063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.471366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.471406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.471708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.471749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.471929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.471969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.472274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.472287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 
00:38:53.191 [2024-06-10 14:07:07.472588] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.472629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.472980] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.473020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.473310] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.473322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.473548] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.473561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.473869] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.473882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.474113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.474126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.474437] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.474477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.474781] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.191 [2024-06-10 14:07:07.474821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.191 qpair failed and we were unable to recover it. 00:38:53.191 [2024-06-10 14:07:07.475158] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.475199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.475482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.475522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 
00:38:53.192 [2024-06-10 14:07:07.475879] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.475921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.476251] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.476291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.476594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.476634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.477002] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.477042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.477353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.477393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.477695] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.477736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.477989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.478001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.478239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.478252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.478545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.478597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.478976] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.479016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 
00:38:53.192 [2024-06-10 14:07:07.479267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.479280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.479609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.479651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.479937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.479978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.480256] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.480269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.480553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.480566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.480852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.480864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.481082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.481095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.481401] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.481441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.481739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.481779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.482078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.482091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 
00:38:53.192 [2024-06-10 14:07:07.482330] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.482371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.482665] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.482710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.483034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.483074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.483444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.483484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.483798] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.483839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.484246] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.484258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.484587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.484628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.484942] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.484983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.485294] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.485333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.485715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.485756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 
00:38:53.192 [2024-06-10 14:07:07.486037] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.486050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.486240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.486281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.486570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.486621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.486903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.486933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.487265] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.487305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.487664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.192 [2024-06-10 14:07:07.487713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.192 qpair failed and we were unable to recover it. 00:38:53.192 [2024-06-10 14:07:07.487970] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.487983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.488206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.488247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.488596] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.488636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.488984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.489024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 
00:38:53.193 [2024-06-10 14:07:07.489376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.489388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.489538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.489597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.489970] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.490010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.490307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.490347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.490650] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.490691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.490972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.491012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.491263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.491304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.491652] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.491694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.491977] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.491991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.492212] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.492252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 
00:38:53.193 [2024-06-10 14:07:07.492623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.492664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.492974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.493015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.493389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.493430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.493738] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.493779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.494097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.494138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.494435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.494476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.494707] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.494749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.495034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.495074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.495314] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.495354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.495720] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.495762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 
00:38:53.193 [2024-06-10 14:07:07.495999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.496012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.496183] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.496196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.496424] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.496464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.496639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.496680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.497028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.497041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.497280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.497293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.497591] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.497632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.497886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.497926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.498255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.498295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.498603] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.498643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 
00:38:53.193 [2024-06-10 14:07:07.498965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.499007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.499362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.499402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.499656] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.499697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.500040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.500080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.193 [2024-06-10 14:07:07.500409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.193 [2024-06-10 14:07:07.500449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.193 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.500765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.500806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.501107] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.501148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.501505] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.501545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.501791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.501832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.502077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.502090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 
00:38:53.194 [2024-06-10 14:07:07.502400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.502440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.502725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.502765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.503019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.503059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.503413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.503454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.503804] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.503845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.504071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.504112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.504461] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.504501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.504867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.504908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.505208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.505254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.505540] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.505590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 
00:38:53.194 [2024-06-10 14:07:07.505810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.505850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.506153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.506185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.506358] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.506372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.506647] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.506688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.506926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.506966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.507274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.507287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.507570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.507623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.507855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.507895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.508186] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.508217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.508527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.508540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 
00:38:53.194 [2024-06-10 14:07:07.508776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.508790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.509072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.509113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.509420] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.509460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.509680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.509721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.510087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.510128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.510497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.510537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.510847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.510888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.194 [2024-06-10 14:07:07.511127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.194 [2024-06-10 14:07:07.511166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.194 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.511404] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.511444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.511794] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.511835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 
00:38:53.195 [2024-06-10 14:07:07.512121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.512160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.512453] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.512493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.512787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.512829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.513057] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.513096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.513329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.513369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.513618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.513660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.513984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.514023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.514311] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.514351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.514636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.514676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.515022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.515062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 
00:38:53.195 [2024-06-10 14:07:07.515266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.515279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.515564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.515582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.515888] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.515901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.516086] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.516099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.516336] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.516349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.516543] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.516556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.516735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.516748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.516929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.516969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.517254] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.517301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.517515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.517555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 
00:38:53.195 [2024-06-10 14:07:07.517796] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.517836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.518062] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.518075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.518299] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.518338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.518691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.518732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.519124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.519165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.519471] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.519483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.519636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.519650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.519956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.519968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.520201] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.520213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.520450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.520463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 
00:38:53.195 [2024-06-10 14:07:07.520763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.520776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.521107] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.521119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.521349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.521390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.521627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.521668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.521973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.521986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.522226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.195 [2024-06-10 14:07:07.522239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.195 qpair failed and we were unable to recover it. 00:38:53.195 [2024-06-10 14:07:07.522521] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.522562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.522938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.522978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.523255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.523267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.523484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.523497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 
00:38:53.196 [2024-06-10 14:07:07.523716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.523729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.523973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.524013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.524239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.524279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.524445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.524486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.524724] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.524765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.524931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.524971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.525297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.525337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.525688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.525729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.526008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.526048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.526429] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.526469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 
00:38:53.196 [2024-06-10 14:07:07.526750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.526791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.526978] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.526992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.527222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.527262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.527657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.527698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.527999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.528012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.528189] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.528202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.528428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.528468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.528683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.528723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.529097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.529143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 00:38:53.196 [2024-06-10 14:07:07.529361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.196 [2024-06-10 14:07:07.529401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.196 qpair failed and we were unable to recover it. 
00:38:53.201 [2024-06-10 14:07:07.591438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.201 [2024-06-10 14:07:07.591451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.201 qpair failed and we were unable to recover it. 00:38:53.201 [2024-06-10 14:07:07.591732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.201 [2024-06-10 14:07:07.591746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.201 qpair failed and we were unable to recover it. 00:38:53.201 [2024-06-10 14:07:07.592078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.201 [2024-06-10 14:07:07.592118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.201 qpair failed and we were unable to recover it. 00:38:53.201 [2024-06-10 14:07:07.592498] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.201 [2024-06-10 14:07:07.592538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.201 qpair failed and we were unable to recover it. 00:38:53.201 [2024-06-10 14:07:07.592945] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.201 [2024-06-10 14:07:07.592986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.201 qpair failed and we were unable to recover it. 00:38:53.201 [2024-06-10 14:07:07.593271] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.201 [2024-06-10 14:07:07.593311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.201 qpair failed and we were unable to recover it. 00:38:53.201 [2024-06-10 14:07:07.593623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.201 [2024-06-10 14:07:07.593664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.201 qpair failed and we were unable to recover it. 00:38:53.201 [2024-06-10 14:07:07.593966] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.201 [2024-06-10 14:07:07.594007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.201 qpair failed and we were unable to recover it. 00:38:53.201 [2024-06-10 14:07:07.594355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.201 [2024-06-10 14:07:07.594401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.201 qpair failed and we were unable to recover it. 00:38:53.201 [2024-06-10 14:07:07.594619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.201 [2024-06-10 14:07:07.594660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 
00:38:53.202 [2024-06-10 14:07:07.594955] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.595005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.595323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.595363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.595731] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.595772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.596152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.596192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.596466] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.596478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.596763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.596803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.597155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.597194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.597543] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.597593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.597990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.598030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.598317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.598330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 
00:38:53.202 [2024-06-10 14:07:07.598645] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.598685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.599045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.599085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.599361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.599373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.599671] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.599712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.600083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.600124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.600415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.600428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.600668] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.600708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.601003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.601044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.601395] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.601435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.601713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.601754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 
00:38:53.202 [2024-06-10 14:07:07.602124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.602164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.602536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.602587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.602938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.602978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.603307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.603348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.603707] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.603748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.604059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.604100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.604444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.604484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.604785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.604827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.605176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.605216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.605442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.605482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 
00:38:53.202 [2024-06-10 14:07:07.605780] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.605820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.606169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.606209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.606545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.606557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.606795] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.606808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.607122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.607162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.607458] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.607499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.607887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.607928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.202 [2024-06-10 14:07:07.608239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.202 [2024-06-10 14:07:07.608279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.202 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.608586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.608634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.608881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.608922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 
00:38:53.203 [2024-06-10 14:07:07.609226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.609270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.609515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.609528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.609731] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.609772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.610146] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.610187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.610483] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.610523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.610863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.610905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.611275] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.611288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.611590] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.611604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.611840] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.611853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.612089] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.612102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 
00:38:53.203 [2024-06-10 14:07:07.612343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.612383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.612734] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.612775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.613204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.613245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.613596] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.613637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.613931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.613971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.614346] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.614386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.614757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.614798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.615167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.615207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.615571] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.615621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.615854] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.615894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 
00:38:53.203 [2024-06-10 14:07:07.616184] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.616225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.616526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.616566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.616857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.616897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.617252] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.617293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.617629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.617671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.618034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.618075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.618380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.618420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.618789] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.618830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.619187] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.619228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 00:38:53.203 [2024-06-10 14:07:07.619511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.203 [2024-06-10 14:07:07.619551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.203 qpair failed and we were unable to recover it. 
00:38:53.203 [2024-06-10 14:07:07.619968] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.620008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.620392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.620432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.620739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.620780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.621097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.621137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.621389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.621429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.621800] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.621841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.622167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.622207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.622614] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.622656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.622890] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.622936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.623307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.623347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 
00:38:53.204 [2024-06-10 14:07:07.623703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.623743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.624111] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.624151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.624423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.624436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.624700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.624741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.625030] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.625069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.625440] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.625479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.625836] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.625849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.626087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.626100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.626325] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.626365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.626607] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.626649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 
00:38:53.204 [2024-06-10 14:07:07.626965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.627004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.627376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.627416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.627683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.627696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.628008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.628047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.628405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.628445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.628716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.628729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.628912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.628926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.629042] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.629055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.629341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.629385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.629667] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.629708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 
00:38:53.204 [2024-06-10 14:07:07.629988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.630028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.630401] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.630441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.630789] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.630831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.631217] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.631257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.631650] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.204 [2024-06-10 14:07:07.631690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.204 qpair failed and we were unable to recover it. 00:38:53.204 [2024-06-10 14:07:07.632048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.632089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.632315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.632328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.632631] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.632672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.633047] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.633103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.633475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.633490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 
00:38:53.205 [2024-06-10 14:07:07.633800] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.633814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.634052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.634092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.634443] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.634483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.634812] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.634854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.635224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.635278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.635663] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.635679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.635994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.636033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.636384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.636424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.636726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.636743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.205 [2024-06-10 14:07:07.636985] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.637013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 
00:38:53.205 [2024-06-10 14:07:07.637395] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.205 [2024-06-10 14:07:07.637451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.205 qpair failed and we were unable to recover it. 00:38:53.490 [2024-06-10 14:07:07.637851] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.490 [2024-06-10 14:07:07.637866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.490 qpair failed and we were unable to recover it. 00:38:53.490 [2024-06-10 14:07:07.638085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.490 [2024-06-10 14:07:07.638099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.490 qpair failed and we were unable to recover it. 00:38:53.490 [2024-06-10 14:07:07.638384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.490 [2024-06-10 14:07:07.638397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.490 qpair failed and we were unable to recover it. 00:38:53.491 [2024-06-10 14:07:07.638695] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.491 [2024-06-10 14:07:07.638708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.491 qpair failed and we were unable to recover it. 00:38:53.491 [2024-06-10 14:07:07.638938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.491 [2024-06-10 14:07:07.638951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.491 qpair failed and we were unable to recover it. 00:38:53.491 [2024-06-10 14:07:07.639236] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.491 [2024-06-10 14:07:07.639249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.491 qpair failed and we were unable to recover it. 00:38:53.491 [2024-06-10 14:07:07.639546] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.491 [2024-06-10 14:07:07.639559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.491 qpair failed and we were unable to recover it. 00:38:53.491 [2024-06-10 14:07:07.639717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.491 [2024-06-10 14:07:07.639731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.491 qpair failed and we were unable to recover it. 00:38:53.491 [2024-06-10 14:07:07.639905] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.491 [2024-06-10 14:07:07.639918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.491 qpair failed and we were unable to recover it. 
00:38:53.491 [2024-06-10 14:07:07.640137] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.491 [2024-06-10 14:07:07.640151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:53.491 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps changing, covering 2024-06-10 14:07:07.640 through 14:07:07.706 ...]
00:38:53.498 [2024-06-10 14:07:07.706740] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.498 [2024-06-10 14:07:07.706780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:53.498 qpair failed and we were unable to recover it.
00:38:53.498 [2024-06-10 14:07:07.707071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.707112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.707336] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.707376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.707658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.707700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.707996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.708037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.708386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.708425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.708724] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.708740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.709024] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.709038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.709210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.709223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.709458] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.709471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.709659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.709673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 
00:38:53.498 [2024-06-10 14:07:07.709824] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.709837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.710005] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.710018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.710293] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.710334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.710618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.710659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.711008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.711049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.711300] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.711340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.711622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.711663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.711906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.711947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.712235] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.712275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.712564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.712631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 
00:38:53.498 [2024-06-10 14:07:07.712942] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.712955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.713248] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.713289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.713642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.713683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.714002] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.714017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.714234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.714248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.714488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.714502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.714821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.714863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.715179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.715219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.715449] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.715490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.715761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.715803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 
00:38:53.498 [2024-06-10 14:07:07.716045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.716086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.716434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.716474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.716766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.716779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.717010] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.717024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.717205] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.717218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.717463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.717476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.717763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.717777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.717931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.717945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.718114] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.718128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.718310] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.718324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 
00:38:53.498 [2024-06-10 14:07:07.718502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.718515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.718677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.718690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.718858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.718871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.719034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.719047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.719213] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.719226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.719401] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.719416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.719602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.719643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.498 qpair failed and we were unable to recover it. 00:38:53.498 [2024-06-10 14:07:07.720020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.498 [2024-06-10 14:07:07.720060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.720304] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.720345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.720655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.720669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 
00:38:53.499 [2024-06-10 14:07:07.720971] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.721011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.721292] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.721332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.721555] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.721568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.721732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.721745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.721947] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.721987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.722287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.722328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.722625] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.722639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.722881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.722894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.723074] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.723087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.723326] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.723367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 
00:38:53.499 [2024-06-10 14:07:07.723644] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.723685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.723923] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.723963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.724197] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.724237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.724554] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.724610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.724829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.724869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.725057] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.725070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.725292] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.725305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.725457] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.725470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.726836] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.726860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.727116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.727130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 
00:38:53.499 [2024-06-10 14:07:07.727389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.727430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.727806] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.727851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.728098] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.728112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.728283] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.728297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.728531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.728544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.728703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.728716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.729022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.729057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.729414] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.729454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.729744] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.729758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.729913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.729954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 
00:38:53.499 [2024-06-10 14:07:07.730239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.730278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.730499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.730512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.730703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.730744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.731058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.731099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.731342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.731383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.731622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.731669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.732011] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.732025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.732196] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.732210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.732377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.732391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.732550] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.732564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 
00:38:53.499 [2024-06-10 14:07:07.732736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.732750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.733041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.733081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.733363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.733404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.733631] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.733673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.734047] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.734087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.734321] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.734361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.734657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.734671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.734907] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.734921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.735147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.735160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.735380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.735394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 
00:38:53.499 [2024-06-10 14:07:07.735611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.735624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.735776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.499 [2024-06-10 14:07:07.735790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.499 qpair failed and we were unable to recover it. 00:38:53.499 [2024-06-10 14:07:07.736061] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.736074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.736296] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.736310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.736555] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.736568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.736739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.736752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.736876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.736916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.737200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.737240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.737601] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.737643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.737947] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.737988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 
00:38:53.500 [2024-06-10 14:07:07.738376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.738417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.738696] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.738737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.739026] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.739039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.739228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.739241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.739463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.739476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.739728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.739769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.740119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.740160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.740445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.740485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.740726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.740768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.741052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.741093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 
00:38:53.500 [2024-06-10 14:07:07.741373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.741414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.741721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.741763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.742071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.742112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.742405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.742445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.742656] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.742670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.742990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.743038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.743274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.743315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.743604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.743645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.743947] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.743988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 00:38:53.500 [2024-06-10 14:07:07.744211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.500 [2024-06-10 14:07:07.744251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.500 qpair failed and we were unable to recover it. 
00:38:53.500 [2024-06-10 14:07:07.744548] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:38:53.500 [2024-06-10 14:07:07.744561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 
00:38:53.500 qpair failed and we were unable to recover it. 
00:38:53.500-00:38:53.505 [The same three-line error sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously in the log from 2024-06-10 14:07:07.744 through 14:07:07.797.]
00:38:53.505 [2024-06-10 14:07:07.797415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.797429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.797659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.797673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.797905] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.797918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.798082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.798095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.798350] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.798364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.798674] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.798687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.798933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.798946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.799130] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.799144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.799457] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.799471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.799709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.799723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 
00:38:53.505 [2024-06-10 14:07:07.799964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.799978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.800206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.800220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.800569] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.800586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.800891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.800904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.505 qpair failed and we were unable to recover it. 00:38:53.505 [2024-06-10 14:07:07.801070] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.505 [2024-06-10 14:07:07.801084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.801417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.801430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.801737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.801750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.801983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.801997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.802234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.802248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.802479] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.802492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 
00:38:53.506 [2024-06-10 14:07:07.802722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.802736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.802987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.803000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.803262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.803275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.803422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.803435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.803655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.803668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.803818] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.803832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.804142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.804155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.804456] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.804470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.804707] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.804721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.805080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.805093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 
00:38:53.506 [2024-06-10 14:07:07.805327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.805341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.805513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.805527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.805695] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.805709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.805961] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.805974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.806213] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.806227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.806446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.806460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.806628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.806642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.806876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.806889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.807183] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.807196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.807419] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.807432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 
00:38:53.506 [2024-06-10 14:07:07.807602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.807616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.807843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.807856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.808099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.808112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.808292] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.808305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.808591] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.808605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.808769] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.808782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.809067] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.809081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.809248] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.809262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.809412] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.809426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.809561] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.809574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 
00:38:53.506 [2024-06-10 14:07:07.809763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.809776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.809941] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.809955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.810143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.810158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.810402] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.810416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.810585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.810599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.810886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.810899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.811118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.811132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.811424] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.811438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.811669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.811683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.811868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.811881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 
00:38:53.506 [2024-06-10 14:07:07.812118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.812131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.812323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.812336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.812567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.812594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.812895] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.812909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.506 [2024-06-10 14:07:07.813242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.506 [2024-06-10 14:07:07.813256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.506 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.813564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.813583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.813903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.813917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.814153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.814166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.814384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.814398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.814563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.814584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 
00:38:53.507 [2024-06-10 14:07:07.814755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.814768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.814949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.814963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.815205] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.815218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.815456] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.815470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.815778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.815791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.816022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.816035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.816199] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.816212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.816429] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.816443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.816728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.816741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.816981] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.816995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 
00:38:53.507 [2024-06-10 14:07:07.817156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.817170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.817427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.817441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.817678] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.817692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.817875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.817888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.818174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.818187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.818403] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.818417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.818652] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.818666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.818974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.818988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.819208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.819222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.819454] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.819467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 
00:38:53.507 [2024-06-10 14:07:07.819754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.819768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.819998] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.820012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.820231] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.820247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.820558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.820571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.820816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.820830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.821144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.821157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.821485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.821498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.821739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.821753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.821997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.822011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.822274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.822288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 
00:38:53.507 [2024-06-10 14:07:07.822420] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.822433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.822556] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.822570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.822739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.822753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.822938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.822951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.823259] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.823273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.823455] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.823468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.823691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.823705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.823937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.823951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.824059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.824072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.824317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.824331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 
00:38:53.507 [2024-06-10 14:07:07.824638] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.824652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.824888] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.824902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.825071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.825084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.825249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.825262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1649415 Killed "${NVMF_APP[@]}" "$@" 00:38:53.507 [2024-06-10 14:07:07.825442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.825457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.507 qpair failed and we were unable to recover it. 00:38:53.507 [2024-06-10 14:07:07.825678] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.507 [2024-06-10 14:07:07.825691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.825929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.825943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:38:53.508 [2024-06-10 14:07:07.826182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.826197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.826431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.826446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 
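The "Killed" message above is bash reporting that the previously started target application (PID 1649415, the "${NVMF_APP[@]}" job launched by host/target_disconnect.sh) has been terminated, which is what a target-disconnect test is expected to do. With nothing listening on 10.0.0.2:4420 any more, every connect() from the initiator is refused, so posix_sock_create() keeps logging errno = 111 (ECONNREFUSED on Linux) until the target is restarted further down. The standalone C sketch that follows only reproduces that failure mode; the loopback address and port in it are placeholders chosen on the assumption that nothing listens there, and it is not part of the SPDK test code.

    /* Sketch: connect() to a TCP port with no listener fails with
     * ECONNREFUSED, which is errno 111 on Linux, the same value
     * posix_sock_create() reports in the log above.
     * Address and port are placeholders, assumed to have no listener. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                      /* NVMe/TCP default port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumed: no listener here */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }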
00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:53.508 [2024-06-10 14:07:07.826624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.826639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:53.508 [2024-06-10 14:07:07.826861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.826876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.827044] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.827059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:53.508 [2024-06-10 14:07:07.827297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.827311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:53.508 [2024-06-10 14:07:07.827469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.827484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.827714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.827728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.827894] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.827907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.828661] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.828685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 
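The nvmfappstart -m 0xF0 trace above restarts the target with a hexadecimal CPU core mask, following the usual SPDK/DPDK convention: 0xF0 has bits 4 through 7 set, so the application is pinned to logical CPUs 4-7. The few lines of C below only decode that bit mask for illustration; they are not SPDK code.

    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xF0;   /* the core mask passed via -m in the trace above */
        printf("core mask 0x%lX selects CPUs:", mask);
        for (int cpu = 0; cpu < 64; cpu++) {
            if (mask & (1UL << cpu))
                printf(" %d", cpu);  /* prints 4 5 6 7 for 0xF0 */
        }
        printf("\n");
        return 0;
    }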
00:38:53.508 [2024-06-10 14:07:07.828953] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.828967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.829278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.829291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.829460] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.829473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.829717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.829730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.830018] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.830032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.830273] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.830286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.830456] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.830470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.830706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.830719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.830886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.830899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.831134] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.831147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 
00:38:53.508 [2024-06-10 14:07:07.831360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.831374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.831538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.831552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.831822] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.831836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.831990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.832004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.832243] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.832257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1650294 00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1650294 00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1650294 ']' 00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:53.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:53.508 [2024-06-10 14:07:07.834415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.834442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 
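After recording nvmfpid=1650294, the harness calls waitforlisten 1650294 with rpc_addr=/var/tmp/spdk.sock, which blocks until the freshly started target process is up and its RPC socket accepts connections; that is why the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message appears. The real waitforlisten is a shell helper in SPDK's common test scripts; the C sketch below only illustrates the retry-until-accepting idea behind it and is not that helper.

    /* Illustration only: poll a UNIX-domain socket until something accepts.
     * The real waitforlisten helper is shell code in the SPDK test scripts. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Return 0 once connect() to `path` succeeds, -1 after max_tries failures. */
    static int wait_for_listen(const char *path, int max_tries)
    {
        for (int i = 0; i < max_tries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;

            struct sockaddr_un addr = {0};
            addr.sun_family = AF_UNIX;
            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;            /* the RPC socket is listening */
            }
            close(fd);
            usleep(100 * 1000);      /* back off 100 ms and retry */
        }
        return -1;
    }

    int main(void)
    {
        /* /var/tmp/spdk.sock is the default SPDK RPC socket named in the log. */
        if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
            printf("target is listening\n");
        else
            printf("timed out waiting for the RPC socket\n");
        return 0;
    }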
00:38:53.508 14:07:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:53.508 [2024-06-10 14:07:07.835570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.835602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.835882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.835895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.836062] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.836076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.836271] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.508 [2024-06-10 14:07:07.836314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.508 qpair failed and we were unable to recover it. 00:38:53.508 [2024-06-10 14:07:07.836638] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.836679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.836905] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.836918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.837179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.837193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.837431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.837444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.837729] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.837742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 
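Here nvmf/common.sh launches the second nvmf_tgt instance inside the cvl_0_0_ns_spdk network namespace, with tracepoints enabled (-e 0xFFFF) and core mask 0xF0. Until that target binds its TCP listener, every connect() from the initiator to 10.0.0.2:4420 is refused, and errno 111 on Linux is ECONNREFUSED, which is exactly the posix_sock_create/nvme_tcp_qpair_connect_sock pair repeated throughout this section. A hypothetical way to watch for the listener coming up (not part of the test scripts):

    # Hypothetical probe: loops until something accepts TCP connections on 10.0.0.2:4420.
    # Run it in the same network namespace as the initiator so routing matches the test.
    until nc -z -w 1 10.0.0.2 4420; do
        echo "port 4420 still refusing connections (errno 111, ECONNREFUSED)"
        sleep 0.5
    done
    echo "NVMe-oF/TCP listener is up"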
00:38:53.509 [2024-06-10 14:07:07.838019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.838059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.838386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.838426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.838791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.838832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.839122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.839135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.839363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.839376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.839681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.839693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.839979] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.839992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.840167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.840179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.840415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.840428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.840633] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.840646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 
00:38:53.509 [2024-06-10 14:07:07.840934] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.840946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.841233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.841245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.841425] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.841437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.841704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.841721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.841956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.841968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.842063] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.842075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.842392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.842404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.842584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.842597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.842855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.842868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.843133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.843145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 
00:38:53.509 [2024-06-10 14:07:07.843382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.843394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.843693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.843705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.843823] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.843835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.844028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.844040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.844315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.844327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.844552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.844565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.844843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.844855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.845083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.845096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.845400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.845412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.845628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.845641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 
00:38:53.509 [2024-06-10 14:07:07.845810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.845822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.845981] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.845994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.846167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.846179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.846422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.846434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.846718] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.846731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.847045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.847057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.847220] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.847232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.847552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.509 [2024-06-10 14:07:07.847565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.509 qpair failed and we were unable to recover it. 00:38:53.509 [2024-06-10 14:07:07.847806] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.847819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.848072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.848085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 
00:38:53.510 [2024-06-10 14:07:07.848249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.848261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.848439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.848451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.848739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.848752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.848971] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.848984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.849199] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.849212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.849464] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.849477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.849703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.849716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.850003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.850016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.850202] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.850214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.850373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.850385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 
00:38:53.510 [2024-06-10 14:07:07.850547] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.850560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.850847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.850860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.851032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.851044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.851330] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.851344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.851572] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.851591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.851884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.851897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.852119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.852131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.852385] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.852397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.852674] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.852687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.852854] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.852866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 
00:38:53.510 [2024-06-10 14:07:07.853184] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.853197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.853485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.853497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.853678] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.853690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.853875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.853887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.854041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.854053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.854210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.854223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.854462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.854474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.854784] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.854797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.854923] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.854935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.855102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.855114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 
00:38:53.510 [2024-06-10 14:07:07.855401] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.855414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.855629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.855642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.855804] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.855817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.856004] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.856017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.856141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.856153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.856382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.856394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.856566] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.856585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.856768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.856780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.857093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.857105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.857276] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.857288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 
00:38:53.510 [2024-06-10 14:07:07.857548] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.857560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.857828] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.857840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.857946] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.857959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.858184] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.858196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.510 qpair failed and we were unable to recover it. 00:38:53.510 [2024-06-10 14:07:07.858500] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.510 [2024-06-10 14:07:07.858513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.858768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.858780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.859025] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.859037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.859269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.859281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.859514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.859526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.859770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.859782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 
00:38:53.511 [2024-06-10 14:07:07.860026] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.860038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.860154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.860166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.860408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.860421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.860654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.860670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.860959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.860971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.861189] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.861201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.861444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.861457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.861685] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.861698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.861913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.861926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.862090] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.862102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 
00:38:53.511 [2024-06-10 14:07:07.862430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.862442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.862765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.862778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.862962] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.862974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.863219] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.863231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.863466] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.863479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.863711] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.863724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.863951] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.863991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.864292] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.864332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.864511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.864552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.864869] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.864910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 
00:38:53.511 [2024-06-10 14:07:07.865138] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.865150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.865439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.865452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.865674] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.865686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.866002] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.866014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.866248] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.866260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.866514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.866526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.866758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.866771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.866955] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.866967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.867086] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.867098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.867359] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.867372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 
00:38:53.511 [2024-06-10 14:07:07.867493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.867505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.867613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.867627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.867845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.867857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.868147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.868160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.868336] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.868349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.868583] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.511 [2024-06-10 14:07:07.868595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.511 qpair failed and we were unable to recover it. 00:38:53.511 [2024-06-10 14:07:07.868846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.868858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.869045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.869057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.869363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.869375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.869486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.869499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 
00:38:53.512 [2024-06-10 14:07:07.869732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.869744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.870034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.870046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.870278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.870290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.870528] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.870542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.870708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.870721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.870942] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.870954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.871121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.871133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.871377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.871389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.871546] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.871558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.871829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.871842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 
00:38:53.512 [2024-06-10 14:07:07.872000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.872039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.872283] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.872322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.872641] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.872681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.872859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.872899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.873071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.873083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.873204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.873244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.873530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.873569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.873953] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.873993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.874269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.874309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.874485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.874524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 
00:38:53.512 [2024-06-10 14:07:07.874848] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.874889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.875125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.875165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.875484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.875523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.875816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.875828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.876054] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.876066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.876306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.876319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.876482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.876495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.876744] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.876785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.877602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.877625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.877806] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.877819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 
00:38:53.512 [2024-06-10 14:07:07.878042] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.878054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.879177] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.879199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.879545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.879599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.880594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.880614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.880948] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.880961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.881210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.881249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.881552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.881609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.881810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.881808] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:38:53.512 [2024-06-10 14:07:07.881850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 [2024-06-10 14:07:07.881861] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.882149] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.882161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it.
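Note on the recurring failure above: errno = 111 is Linux's ECONNREFUSED, so every connect() attempt that posix_sock_create / nvme_tcp_qpair_connect_sock makes toward 10.0.0.2:4420 is being refused, most likely because the nvmf target (the "Starting SPDK v24.09-pre ..." process whose DPDK EAL initialization appears above) has not yet brought up its listener on that port. The snippet below is a minimal, self-contained POSIX C sketch of the same failure mode, for illustration only; it is not SPDK code, and only the address 10.0.0.2 and port 4420 are taken from the log.

/*
 * Illustrative sketch only -- not SPDK's implementation. It shows how a plain
 * POSIX connect() to 10.0.0.2:4420 surfaces errno 111 (ECONNREFUSED) when no
 * NVMe/TCP target is listening yet, which is what the log above keeps reporting.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),              /* NVMe/TCP port used by the test */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno = 111 (Connection refused). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}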
00:38:53.512 [2024-06-10 14:07:07.882323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.882334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.882535] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.882557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.882792] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.882804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.883091] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.883108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.512 [2024-06-10 14:07:07.883395] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.512 [2024-06-10 14:07:07.883408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.512 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.883527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.883539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.883760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.883773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.884015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.884027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.884872] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.884894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.885242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.885255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 
00:38:53.513 [2024-06-10 14:07:07.885474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.885487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.885725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.885738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.885895] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.885908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.886127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.886139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.886355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.886367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.886515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.886527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.886688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.886702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.886891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.886903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.887073] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.887086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.887307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.887320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 
00:38:53.513 [2024-06-10 14:07:07.887486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.887498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.888420] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.888441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.888730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.888743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.889120] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.889134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.889983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.890007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.890328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.890342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.890611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.890654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.890993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.891032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.891313] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.891325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.891552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.891564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 
00:38:53.513 [2024-06-10 14:07:07.891860] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.891872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.892114] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.892127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.892346] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.892359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.892624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.892636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.892855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.892867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.893718] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.893740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.894071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.894084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.894329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.894371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.894660] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.894703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.894936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.894948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 
00:38:53.513 [2024-06-10 14:07:07.895205] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.895217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.895536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.895548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.895714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.895727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.895964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.895980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.896207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.896220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.896448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.896461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.896689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.896701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.896871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.896883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.897060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.897072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.897288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.897300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 
00:38:53.513 [2024-06-10 14:07:07.897544] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.513 [2024-06-10 14:07:07.897556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.513 qpair failed and we were unable to recover it. 00:38:53.513 [2024-06-10 14:07:07.897734] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.897747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.898054] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.898067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.898392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.898404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.898651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.898663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.898902] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.898914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.899142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.899154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.899394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.899406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.899646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.899659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.899825] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.899837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 
00:38:53.514 [2024-06-10 14:07:07.900074] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.900087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.900314] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.900327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.900477] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.900489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.900778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.900790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.901041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.901054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.901281] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.901293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.901459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.901471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.901714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.901727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.901947] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.901960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.902121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.902134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 
00:38:53.514 [2024-06-10 14:07:07.902352] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.902364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.902542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.902554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.902734] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.902748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.902926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.902939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.903119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.903131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.903283] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.903296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.903514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.903527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.903698] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.903711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.903872] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.903885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.904071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.904083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 
00:38:53.514 [2024-06-10 14:07:07.904306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.904318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.904537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.904549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.904771] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.904784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.905028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.905042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.905222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.905234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.905456] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.905468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.905767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.905780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.906043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.906057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.906372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.906384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.906630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.906642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 
00:38:53.514 [2024-06-10 14:07:07.906828] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.906841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.907074] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.907086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.907267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.907280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.907453] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.907465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.907641] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.907654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.907937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.907950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.908169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.908181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.908410] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.908422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.908639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.908651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.908878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.908890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 
00:38:53.514 [2024-06-10 14:07:07.909110] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.909122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.909342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.909354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.909581] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.909593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.514 [2024-06-10 14:07:07.909763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.514 [2024-06-10 14:07:07.909775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.514 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.910024] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.910037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.910345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.910358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.910537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.910549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.910732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.910745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.910990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.911003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.911313] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.911325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 
00:38:53.515 [2024-06-10 14:07:07.911495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.911507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.911684] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.911696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.911979] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.911991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.912221] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.912234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.912450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.912462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.912701] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.912713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.912866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.912878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.912980] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.912992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.913223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.913236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.913475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.913487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 
00:38:53.515 [2024-06-10 14:07:07.913704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.913716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.913955] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.913967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.914136] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.914148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.914453] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.914468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.914693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.914707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.914930] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.914942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.915156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.915168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.915481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.915494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.915604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.915617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.915769] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.915781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 
00:38:53.515 [2024-06-10 14:07:07.916000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.916012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.916251] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.916263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.916570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.916598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.916815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.916827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.916948] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.916960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.917217] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.917230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.917411] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.917424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.917665] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.917677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.917964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.917977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.918289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.918301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 
00:38:53.515 [2024-06-10 14:07:07.918550] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.918562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.918820] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.918897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2088fc0 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.919216] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.919259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.919529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.919551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.919820] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.919841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7860000b90 with addr=10.0.0.2, port=4420 00:38:53.515 qpair failed and we were unable to recover it. 00:38:53.515 [2024-06-10 14:07:07.920079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.515 [2024-06-10 14:07:07.920092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.920262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.920274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.920493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.920506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.920722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.920735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.920974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.920986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 
00:38:53.516 [2024-06-10 14:07:07.921274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.921286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.921384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.921396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.921712] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.921725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.921911] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.921924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.922033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.922045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.922199] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.922212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.922448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.922461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.922694] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.922706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.922940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.922953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.923210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.923223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 
00:38:53.516 [2024-06-10 14:07:07.923461] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.923473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.923586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.923598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.923838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.923850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.924139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.924154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.924255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.924267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.924491] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.924503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.924683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.924696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.924935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.924948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.925190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.925228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.925525] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.925564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 
00:38:53.516 [2024-06-10 14:07:07.925961] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.926001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.926402] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.926441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.926751] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.926792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 EAL: No free 2048 kB hugepages reported on node 1 00:38:53.516 [2024-06-10 14:07:07.927149] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.927189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.927482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.927521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.927913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.927954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.928230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.928270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.928649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.928690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.929025] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.929064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.929303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.929343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 
00:38:53.516 [2024-06-10 14:07:07.929560] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.929613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.931380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.931405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.931620] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.931634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.931944] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.931956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.932274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.932286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.932605] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.932617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.932886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.932899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.933123] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.933141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.933252] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.933264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.933551] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.933563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 
00:38:53.516 [2024-06-10 14:07:07.933824] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.933837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.934075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.934087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.934342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.934354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.934595] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.934608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.934793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.934806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.516 [2024-06-10 14:07:07.935050] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.516 [2024-06-10 14:07:07.935066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.516 qpair failed and we were unable to recover it. 00:38:53.794 [2024-06-10 14:07:07.935363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.794 [2024-06-10 14:07:07.935376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.794 qpair failed and we were unable to recover it. 00:38:53.794 [2024-06-10 14:07:07.935659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.935672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.935953] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.935967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.936126] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.936137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 
00:38:53.795 [2024-06-10 14:07:07.936322] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.936335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.936638] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.936651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.936908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.936920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.937108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.937123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.937357] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.937369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.937534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.937546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.937784] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.937796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.938064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.938076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.938263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.938276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.938597] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.938609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 
00:38:53.795 [2024-06-10 14:07:07.938862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.938874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.939165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.939177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.939334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.939347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.939648] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.939661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.939891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.939904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.940140] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.940152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.940388] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.940400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.940689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.940703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.940887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.940899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.941118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.941130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 
00:38:53.795 [2024-06-10 14:07:07.941361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.941373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.941530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.941542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.941776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.941789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.942010] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.942023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.942192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.942204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.942387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.942399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.942643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.942656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.942763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.942776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.943001] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.943013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.943125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.943137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 
00:38:53.795 [2024-06-10 14:07:07.943446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.943460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.943649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.943662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.943756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.943768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.943997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.944009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.944234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.795 [2024-06-10 14:07:07.944246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.795 qpair failed and we were unable to recover it. 00:38:53.795 [2024-06-10 14:07:07.944567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.944585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.944754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.944766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.944938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.944950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.945107] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.945119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.945347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.945359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 
00:38:53.796 [2024-06-10 14:07:07.945593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.945606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.945910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.945922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.946145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.946157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.946324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.946338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.946507] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.946519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.946702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.946714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.946876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.946888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.947125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.947137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.947316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.947328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.947475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.947487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 
00:38:53.796 [2024-06-10 14:07:07.947715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.947728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.947880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.947893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.948127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.948139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.948366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.948378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.948597] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.948610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.948899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.948911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.949087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.949099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.949318] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.949330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.949564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.949582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.949872] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.949885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 
00:38:53.796 [2024-06-10 14:07:07.950124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.950136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.950360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.950372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.950592] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.950604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.950831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.950844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.951065] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.951079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.951240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.951252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.951418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.951430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.951627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.951640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.951900] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.951914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.952090] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.952102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 
00:38:53.796 [2024-06-10 14:07:07.952254] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.952266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.952431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.952443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.796 [2024-06-10 14:07:07.952677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.796 [2024-06-10 14:07:07.952689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.796 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.952933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.952946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.953174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.953186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.953340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.953353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.953535] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.953548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.953784] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.953796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.953989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.954001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.954224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.954236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 
00:38:53.797 [2024-06-10 14:07:07.954474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.954486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.954708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.954720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.954937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.954949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.955110] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.955124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.955347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.955359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.955501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.955513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.955824] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.955836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.956068] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.956080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.956320] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.956333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.956506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.956518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 
00:38:53.797 [2024-06-10 14:07:07.956767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.956779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.956936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.956948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.957100] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.957112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.957341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.957353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.957641] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.957653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.957859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.957871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.958046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.958058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.958239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.958251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.958536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.958550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.958739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.958751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 
00:38:53.797 [2024-06-10 14:07:07.958973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.958985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.959211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.959224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.959435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.959447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.959679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.959691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.959930] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.959943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.960103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.960115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.960258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.960271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.960563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.960585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.960748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.960760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.797 [2024-06-10 14:07:07.960903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.960915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 
00:38:53.797 [2024-06-10 14:07:07.961143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.797 [2024-06-10 14:07:07.961155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.797 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.961379] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.961391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.961611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.961624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.961772] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.961784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.961954] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.961966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.962306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.962318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.962540] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.962552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.962730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.962742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.962899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.962911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.963064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.963077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 
00:38:53.798 [2024-06-10 14:07:07.963244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.963256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.963415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.963426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.963631] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.963644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.963882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.963896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.964139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.964151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.964301] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.964313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.964529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.964541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.964754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.964766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.965008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.965021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.965170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.965182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 
00:38:53.798 [2024-06-10 14:07:07.965489] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.965502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.965756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.965769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.965920] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.965932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.966168] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.966180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.966349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.966361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.966513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.966525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.966708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.966720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.966940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.966952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.967168] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.967181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.967351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.967363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 
00:38:53.798 [2024-06-10 14:07:07.967490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.967502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.967733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.967746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.968006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.968019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.968254] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.968266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.968492] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.968504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.968721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.968734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.968851] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.968863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.969013] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.969026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.798 [2024-06-10 14:07:07.969191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.798 [2024-06-10 14:07:07.969203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.798 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.969367] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.969378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 
00:38:53.799 [2024-06-10 14:07:07.969628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.969641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.969831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.969843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.970073] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.970086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.970269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.970282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.970515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.970527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.970745] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.970758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.971038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.971051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.971212] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.971224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.971534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.971546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.971776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.971789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 
00:38:53.799 [2024-06-10 14:07:07.971940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.971953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.972206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.972219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.972493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.972505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.972733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.972747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.972906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.972918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.973066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.973078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.973240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.973253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.973468] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.973480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.973651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.973664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.973881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.973892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 
00:38:53.799 [2024-06-10 14:07:07.974194] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.974206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.974323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.974335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.974570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.974589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.974922] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.974934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.975097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.975109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.975279] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.975291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.975509] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.975521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.975742] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.975755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.975901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.975913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.799 [2024-06-10 14:07:07.976153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.976166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 
00:38:53.799 [2024-06-10 14:07:07.976348] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.799 [2024-06-10 14:07:07.976360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.799 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.976594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.976607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.976856] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.976869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.977107] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.977120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.977277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.977289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.977524] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.977536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.977766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.977778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.977999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.978011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.978234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.978246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.978482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.978494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 
00:38:53.800 [2024-06-10 14:07:07.978679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.978691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.978857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.978869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.979117] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.979130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.979328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.979340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.979440] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.979451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.979706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.979719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.979899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.979911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.980127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.980139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.980365] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.980377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.980613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.980626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 
00:38:53.800 [2024-06-10 14:07:07.980845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.980857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.980984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.980996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.981238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.981251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.981538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.981554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.981826] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.981839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.982015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.982027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.982215] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.982227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.982397] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.982409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.982653] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.982667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.982853] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.982866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 
00:38:53.800 [2024-06-10 14:07:07.983020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.983032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.983193] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.983205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.983503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.983515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.983774] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.983786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.984037] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.984049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.984202] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.984214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.984366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.984379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.800 [2024-06-10 14:07:07.984605] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.800 [2024-06-10 14:07:07.984618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.800 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.984793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.984805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.985040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.985053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 
00:38:53.801 [2024-06-10 14:07:07.985272] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.985284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.985451] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.985463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.985630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.985643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.985965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.985977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.986093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.986106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.986258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.986269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.986557] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.986569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.986680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.986692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.986924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.986937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.987154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.987166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 
00:38:53.801 [2024-06-10 14:07:07.987420] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.987432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.987663] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.987676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.987967] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.987980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.988195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.988208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.988358] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.988371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.988535] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.988548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.988765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.988778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.989016] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.989031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.989346] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.989361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.989516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.989528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 
00:38:53.801 [2024-06-10 14:07:07.989839] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.989851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.989962] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.989974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.990138] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.990150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.990387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.990402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.990590] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.990603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.990784] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.990801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.991038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.991052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.991322] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.991335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.991484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.991497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.991735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.991747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 
00:38:53.801 [2024-06-10 14:07:07.991920] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.991932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.992198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.992211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.992500] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.992512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.992747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.992759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.993003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.993015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.993223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.801 [2024-06-10 14:07:07.993236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.801 qpair failed and we were unable to recover it. 00:38:53.801 [2024-06-10 14:07:07.993366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.993378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.993498] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.993509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.993828] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.993842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.994003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.994015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 
00:38:53.802 [2024-06-10 14:07:07.994324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.994337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.994573] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.994590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.994702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.994714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.994874] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.994886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.995044] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.995056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.995348] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.995360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.995646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.995658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.995918] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.995930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.996097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.996110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.996350] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.996362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 
00:38:53.802 [2024-06-10 14:07:07.996604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.996616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.996756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:53.802 [2024-06-10 14:07:07.996796] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.996809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.996961] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.996973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.997230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.997242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.997461] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.997473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.997692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.997705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.997935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.997947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.998107] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.998120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 00:38:53.802 [2024-06-10 14:07:07.998390] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.802 [2024-06-10 14:07:07.998402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.802 qpair failed and we were unable to recover it. 
00:38:53.802 [2024-06-10 14:07:07.998562] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.802 [2024-06-10 14:07:07.998574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:53.802 qpair failed and we were unable to recover it.
00:38:53.802-00:38:53.808 (this three-message sequence, posix.c:1046:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED), then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.", repeats identically for every reconnect attempt from 14:07:07.998 through 14:07:08.046; only the first and last occurrences are reproduced here)
00:38:53.808 [2024-06-10 14:07:08.046669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.808 [2024-06-10 14:07:08.046681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:53.808 qpair failed and we were unable to recover it.
00:38:53.808 [2024-06-10 14:07:08.046928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.046940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.047114] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.047125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.047428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.047441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.047613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.047626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.047793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.047806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.048042] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.048053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.048208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.048220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.048505] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.048517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.048679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.048691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.048874] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.048886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 
00:38:53.808 [2024-06-10 14:07:08.049146] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.049158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.049389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.049401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.049580] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.049593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.049897] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.049909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.050157] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.050169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.050347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.050359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.050481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.050493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.050724] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.050737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.050842] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.050854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 00:38:53.808 [2024-06-10 14:07:08.051139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.808 [2024-06-10 14:07:08.051151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.808 qpair failed and we were unable to recover it. 
00:38:53.808 [2024-06-10 14:07:08.051324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.051336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.051590] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.051603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.051770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.051782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.052055] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.052067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.052303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.052314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.052597] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.052610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.052858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.052872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.053109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.053122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.053288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.053300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.053471] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.053483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 
00:38:53.809 [2024-06-10 14:07:08.053719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.053731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.053971] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.053983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.054290] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.054302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.054532] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.054545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.055089] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.055118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.055380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.055393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.055688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.055700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.056009] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.056021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.056310] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.056323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.056559] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.056572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 
00:38:53.809 [2024-06-10 14:07:08.056735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.056747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.056868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.056879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.057135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.057146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.057375] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.057387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.057607] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.057620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.057768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.057782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.057935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.057947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.058164] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.058176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.058397] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.058409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.058647] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.058660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 
00:38:53.809 [2024-06-10 14:07:08.058919] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.058932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.059154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.059167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.059403] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.059416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.059643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.059667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.059886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.059898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.060128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.060140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.060378] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.060390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.060563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.809 [2024-06-10 14:07:08.060580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.809 qpair failed and we were unable to recover it. 00:38:53.809 [2024-06-10 14:07:08.060820] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.060833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.061061] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.061073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 
00:38:53.810 [2024-06-10 14:07:08.061225] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.061237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.061474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.061487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.061662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.061675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.061902] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.061915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.062159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.062171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.062425] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.062437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.062558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.062569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.062734] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.062746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.062987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.063000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.063182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.063194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 
00:38:53.810 [2024-06-10 14:07:08.063509] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.063521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.063747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.063760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.063928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.063940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.064208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.064221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.064392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.064406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.064693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.064706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.064855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.064867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.065153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.065165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.065398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.065411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.065584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.065597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 
00:38:53.810 [2024-06-10 14:07:08.065782] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.065794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.066085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.066097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.066265] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.066277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.066512] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.066524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.066685] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.066697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.066916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.066929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.067183] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.067195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.067424] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.067436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.067670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.067682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.067904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.067916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 
00:38:53.810 [2024-06-10 14:07:08.068174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.068186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.068474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.068486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.068730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.068743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.068998] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.069010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.069229] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.069241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.069427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.810 [2024-06-10 14:07:08.069439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.810 qpair failed and we were unable to recover it. 00:38:53.810 [2024-06-10 14:07:08.069666] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.069678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.069917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.069929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.070180] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.070192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.070442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.070454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 
00:38:53.811 [2024-06-10 14:07:08.070734] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.070747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.070933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.070946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.071180] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.071192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.071489] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.071501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.071763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.071775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.072001] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.072014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.072257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.072269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.072498] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.072510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.072689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.072702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.072862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.072874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 
00:38:53.811 [2024-06-10 14:07:08.073111] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.073124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.073285] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.073297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.073542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.073555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.073777] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.073789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.074028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.074042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.074339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.074351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.074593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.074605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.074758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.074771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.075081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.075094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.075267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.075279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 
00:38:53.811 [2024-06-10 14:07:08.075555] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.075568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.075767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.075780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.076005] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.076017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.076239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.076251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.076484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.076496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.076743] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.076755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.076895] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.076907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.077160] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.077172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.077356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.077369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.077541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.077553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 
00:38:53.811 [2024-06-10 14:07:08.077728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.077740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.077959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.077971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.078152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.078164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.078325] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.811 [2024-06-10 14:07:08.078337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.811 qpair failed and we were unable to recover it. 00:38:53.811 [2024-06-10 14:07:08.078600] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.078613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.078847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.078860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.079078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.079090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.079132] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:53.812 [2024-06-10 14:07:08.079168] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:53.812 [2024-06-10 14:07:08.079181] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:53.812 [2024-06-10 14:07:08.079193] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:53.812 [2024-06-10 14:07:08.079203] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:53.812 [2024-06-10 14:07:08.079253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.079265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 
00:38:53.812 [2024-06-10 14:07:08.079422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.079434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.079324] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:38:53.812 [2024-06-10 14:07:08.079359] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:38:53.812 [2024-06-10 14:07:08.079470] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:38:53.812 [2024-06-10 14:07:08.079675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.079687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.079469] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 7 00:38:53.812 [2024-06-10 14:07:08.079912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.079924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.080067] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.080079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.080339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.080353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.080527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.080540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.080778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.080792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.080993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.081006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.081233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.081247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 
00:38:53.812 [2024-06-10 14:07:08.081418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.081431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.081618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.081631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.081854] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.081868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.081992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.082005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.082188] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.082203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.082434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.082448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.082608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.082622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.082894] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.082907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.083081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.083095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.083382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.083395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 
00:38:53.812 [2024-06-10 14:07:08.083557] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.083570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.083808] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.083822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.083971] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.083984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.084207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.084220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.084455] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.084468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.084785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.084799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.084963] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.084977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.085132] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.085145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.812 [2024-06-10 14:07:08.085361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.812 [2024-06-10 14:07:08.085374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.812 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.085616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.085630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 
00:38:53.813 [2024-06-10 14:07:08.085798] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.085811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.086095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.086108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.086316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.086330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.086506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.086519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.086768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.086782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.086941] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.086955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.087175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.087188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.087405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.087418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.087569] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.087587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.087830] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.087844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 
00:38:53.813 [2024-06-10 14:07:08.088096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.088110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.088333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.088346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.088516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.088530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.088679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.088693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.088999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.089013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.089249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.089263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.089435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.089449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.089625] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.089639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.089857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.089871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.090128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.090142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 
00:38:53.813 [2024-06-10 14:07:08.090315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.090329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.090571] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.090593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.090827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.090840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.091072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.091085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.091352] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.091372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.091542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.091555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.091832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.091846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.092077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.092091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.092283] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.092297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.813 [2024-06-10 14:07:08.092464] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.092478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 
00:38:53.813 [2024-06-10 14:07:08.092649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.813 [2024-06-10 14:07:08.092664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.813 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.092894] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.092907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.093124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.093139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.093302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.093316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.093431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.093446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.093636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.093651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.093832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.093847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.094064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.094079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.094307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.094323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.094543] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.094559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 
00:38:53.814 [2024-06-10 14:07:08.094733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.094748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.094989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.095004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.095242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.095257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.095438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.095452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.095588] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.095603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.095824] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.095839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.096125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.096140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.096304] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.096319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.096479] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.096493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.096714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.096730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 
00:38:53.814 [2024-06-10 14:07:08.096988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.097002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.097248] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.097264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.097438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.097453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.097700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.097716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.097890] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.097905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.098137] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.098151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.098380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.098396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.098558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.098572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.098731] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.098746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.098897] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.098911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 
00:38:53.814 [2024-06-10 14:07:08.099104] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.099120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.099288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.099301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.099629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.099644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.099931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.099945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.100095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.100113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.100294] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.100308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.100477] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.100491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.100655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.100669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.814 [2024-06-10 14:07:08.100962] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.814 [2024-06-10 14:07:08.100978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.814 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.101094] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.101108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 
00:38:53.815 [2024-06-10 14:07:08.101342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.101355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.101616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.101631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.101918] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.101933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.102169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.102184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.102337] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.102351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.102523] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.102536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.102770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.102784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.102951] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.102965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.103191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.103204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.103437] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.103451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 
00:38:53.815 [2024-06-10 14:07:08.103749] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.103764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.103924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.103937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.104148] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.104163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.104448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.104463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.104643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.104657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.104798] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.104811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.105041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.105056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.105308] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.105323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.105579] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.105594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.105824] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.105837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 
00:38:53.815 [2024-06-10 14:07:08.106149] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.106164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.106478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.106491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.106673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.106687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.106868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.106881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.107175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.107189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.107420] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.107434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.107609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.107623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.107848] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.107862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.108009] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.108022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.108238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.108253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 
00:38:53.815 [2024-06-10 14:07:08.108407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.108420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.108772] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.108787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.108956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.108969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.109208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.109221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.815 qpair failed and we were unable to recover it. 00:38:53.815 [2024-06-10 14:07:08.109439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.815 [2024-06-10 14:07:08.109456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.109633] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.109647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.109885] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.109900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.110130] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.110144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.110383] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.110396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.110501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.110514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 
00:38:53.816 [2024-06-10 14:07:08.110673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.110687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.110804] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.110818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.110984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.110998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.111240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.111254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.111538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.111552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.111782] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.111797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.112052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.112066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.112248] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.112262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.112433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.112446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.112631] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.112645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 
00:38:53.816 [2024-06-10 14:07:08.112968] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.112983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.113215] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.113229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.113414] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.113428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.113678] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.113693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.113911] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.113925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.114099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.114112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.114362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.114377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.114612] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.114626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.114956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.114970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 00:38:53.816 [2024-06-10 14:07:08.115197] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.816 [2024-06-10 14:07:08.115211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.816 qpair failed and we were unable to recover it. 
00:38:53.816 [2024-06-10 14:07:08.115507] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.816 [2024-06-10 14:07:08.115520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:53.816 qpair failed and we were unable to recover it.
00:38:53.822 [2024-06-10 14:07:08.172062] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.822 [2024-06-10 14:07:08.172075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.822 qpair failed and we were unable to recover it. 00:38:53.822 [2024-06-10 14:07:08.172258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.822 [2024-06-10 14:07:08.172271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.822 qpair failed and we were unable to recover it. 00:38:53.822 [2024-06-10 14:07:08.172517] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.822 [2024-06-10 14:07:08.172531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.822 qpair failed and we were unable to recover it. 00:38:53.822 [2024-06-10 14:07:08.172847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.822 [2024-06-10 14:07:08.172861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.822 qpair failed and we were unable to recover it. 00:38:53.822 [2024-06-10 14:07:08.173038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.822 [2024-06-10 14:07:08.173053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.822 qpair failed and we were unable to recover it. 00:38:53.822 [2024-06-10 14:07:08.173291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.822 [2024-06-10 14:07:08.173305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.822 qpair failed and we were unable to recover it. 00:38:53.822 [2024-06-10 14:07:08.173621] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.822 [2024-06-10 14:07:08.173634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.822 qpair failed and we were unable to recover it. 00:38:53.822 [2024-06-10 14:07:08.173873] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.822 [2024-06-10 14:07:08.173887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.822 qpair failed and we were unable to recover it. 00:38:53.822 [2024-06-10 14:07:08.174139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.822 [2024-06-10 14:07:08.174153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.822 qpair failed and we were unable to recover it. 00:38:53.822 [2024-06-10 14:07:08.174335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.822 [2024-06-10 14:07:08.174348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.822 qpair failed and we were unable to recover it. 
00:38:53.822 [2024-06-10 14:07:08.174615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.822 [2024-06-10 14:07:08.174629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.822 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.174938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.174951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.175215] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.175228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.175533] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.175547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.175796] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.175810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.176134] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.176147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.176447] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.176460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.176733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.176747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.176980] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.176993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.177165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.177179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 
00:38:53.823 [2024-06-10 14:07:08.177479] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.177493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.177755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.177769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.177989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.178003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.178345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.178358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.178535] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.178548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.178849] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.178862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.179014] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.179027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.179315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.179328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.179556] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.179570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.179820] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.179834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 
00:38:53.823 [2024-06-10 14:07:08.180078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.180092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.180398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.180411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.180639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.180652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.180940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.180953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.181239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.181252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.181645] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.181658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.181855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.181868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.182178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.182192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.182448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.182462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.182702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.182715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 
00:38:53.823 [2024-06-10 14:07:08.183003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.183017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.183325] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.183338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.183580] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.183593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.183761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.183774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.183992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.184008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.184305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.184318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.184559] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.184573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.184824] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.184838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.823 [2024-06-10 14:07:08.185077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.823 [2024-06-10 14:07:08.185091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.823 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.185385] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.185398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 
00:38:53.824 [2024-06-10 14:07:08.185619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.185632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.185872] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.185886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.186038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.186052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.186313] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.186326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.186640] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.186653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.186875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.186889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.187176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.187189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.187501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.187514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.187873] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.187886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.188196] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.188209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 
00:38:53.824 [2024-06-10 14:07:08.188517] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.188530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.188762] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.188775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.188960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.188974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.189282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.189296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.189463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.189476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.189704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.189718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.189985] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.189998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.190174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.190187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.190478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.190491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.190807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.190820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 
00:38:53.824 [2024-06-10 14:07:08.190993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.191006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.191225] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.191238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.191544] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.191558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.191883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.191898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.192143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.192156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.192402] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.192416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.192633] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.192646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.192887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.192900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.193201] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.193214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.193456] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.193469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 
00:38:53.824 [2024-06-10 14:07:08.193732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.193746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.194058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.194071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.194260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.194273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.194580] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.194593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.194783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.194800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.195022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.195036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.195275] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.824 [2024-06-10 14:07:08.195288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.824 qpair failed and we were unable to recover it. 00:38:53.824 [2024-06-10 14:07:08.195522] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.195535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.195710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.195724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.195989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.196002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 
00:38:53.825 [2024-06-10 14:07:08.196224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.196237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.196466] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.196479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.196643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.196656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.196967] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.196981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.197164] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.197177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.197484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.197497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.197806] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.197820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.197979] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.197993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.198237] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.198250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.198536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.198549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 
00:38:53.825 [2024-06-10 14:07:08.198704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.198718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.198987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.199001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.199237] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.199250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.199544] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.199558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.199807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.199820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.200108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.200121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.200380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.200394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.200736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.200749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.201012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.201025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.201366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.201379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 
00:38:53.825 [2024-06-10 14:07:08.201724] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.201738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.202077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.202091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.202265] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.202279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.202584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.202597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.202883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.202896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.203126] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.203139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.203376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.203389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.203608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.203622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.203803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.203817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.204080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.204094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 
00:38:53.825 [2024-06-10 14:07:08.204355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.204368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.204602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.204615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.204954] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.204967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.205203] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.205216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.205436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.205449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.825 qpair failed and we were unable to recover it. 00:38:53.825 [2024-06-10 14:07:08.205744] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.825 [2024-06-10 14:07:08.205758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.826 qpair failed and we were unable to recover it. 00:38:53.826 [2024-06-10 14:07:08.205994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.826 [2024-06-10 14:07:08.206007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.826 qpair failed and we were unable to recover it. 00:38:53.826 [2024-06-10 14:07:08.206281] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.826 [2024-06-10 14:07:08.206295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.826 qpair failed and we were unable to recover it. 00:38:53.826 [2024-06-10 14:07:08.206530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.826 [2024-06-10 14:07:08.206543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.826 qpair failed and we were unable to recover it. 00:38:53.826 [2024-06-10 14:07:08.206832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.826 [2024-06-10 14:07:08.206846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:53.826 qpair failed and we were unable to recover it. 
00:38:53.826 [2024-06-10 14:07:08.207127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.826 [2024-06-10 14:07:08.207140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:53.826 qpair failed and we were unable to recover it.
00:38:53.826 [... the same three-line record repeats for every connection attempt from 14:07:08.207 through 14:07:08.265: connect() failed, errno = 111; sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:38:54.102 [2024-06-10 14:07:08.265790] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.102 [2024-06-10 14:07:08.265804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:54.102 qpair failed and we were unable to recover it.
00:38:54.102 [2024-06-10 14:07:08.266021] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.266035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.266298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.266311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.266587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.266600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.266798] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.266811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.267050] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.267064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.267311] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.267325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.267569] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.267588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.267860] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.267873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.268119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.268132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.268300] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.268313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 
00:38:54.102 [2024-06-10 14:07:08.268632] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.268645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.268887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.268902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.269076] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.269090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.269340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.269353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.269593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.269606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.269792] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.269805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.270054] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.270068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.270418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.270431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.270659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.270672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.270889] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.270903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 
00:38:54.102 [2024-06-10 14:07:08.271214] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.271228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.271536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.271549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.271818] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.271831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.272033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.272046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.272265] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.272279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.272533] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.102 [2024-06-10 14:07:08.272546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.102 qpair failed and we were unable to recover it. 00:38:54.102 [2024-06-10 14:07:08.272868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.272882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.273138] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.273151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.273485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.273498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.273728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.273741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 
00:38:54.103 [2024-06-10 14:07:08.273966] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.273979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.274241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.274255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.274497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.274511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.274779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.274793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.275080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.275094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.275261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.275275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.275455] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.275468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.275703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.275716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.275939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.275953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.276292] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.276306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 
00:38:54.103 [2024-06-10 14:07:08.276547] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.276561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.276755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.276769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.276999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.277012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.277190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.277203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.277514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.277527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.277759] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.277773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.278045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.278058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.278357] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.278370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.278680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.278693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.278958] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.278972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 
00:38:54.103 [2024-06-10 14:07:08.279159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.279172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.279523] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.279539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.279827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.279840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.280102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.280115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.280373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.280386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.280670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.280684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.280878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.280891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.281178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.281191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.281433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.281446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.281682] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.281696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 
00:38:54.103 [2024-06-10 14:07:08.281936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.281950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.282207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.282220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.282526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.282539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.282782] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.103 [2024-06-10 14:07:08.282796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.103 qpair failed and we were unable to recover it. 00:38:54.103 [2024-06-10 14:07:08.283075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.283088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.283374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.283388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.283565] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.283584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.283832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.283846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.284039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.284052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.284267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.284281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 
00:38:54.104 [2024-06-10 14:07:08.284536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.284549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.284790] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.284803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.285116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.285129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.285396] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.285409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.285574] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.285592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.285815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.285828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.286091] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.286104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.286340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.286354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.286596] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.286610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.286831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.286844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 
00:38:54.104 [2024-06-10 14:07:08.287031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.287044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.287280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.287294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.287544] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.287557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.287867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.287880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.288113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.288126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.288440] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.288453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.288686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.288700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.288964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.288977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.289214] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.289228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.289459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.289473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 
00:38:54.104 [2024-06-10 14:07:08.289717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.289730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.290017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.290033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.290253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.290265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.290430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.290443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.290731] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.290745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.290980] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.290993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.291164] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.291177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.291343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.291356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.291684] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.291698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.291921] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.291934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 
00:38:54.104 [2024-06-10 14:07:08.292152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.292166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.292423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.104 [2024-06-10 14:07:08.292437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.104 qpair failed and we were unable to recover it. 00:38:54.104 [2024-06-10 14:07:08.292657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.292672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.292933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.292947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.293235] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.293248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.293473] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.293487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.293769] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.293784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.294048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.294062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.294345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.294359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.294673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.294687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 
00:38:54.105 [2024-06-10 14:07:08.294974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.294987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.295141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.295154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.295495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.295509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.295692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.295706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.295872] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.295886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.296173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.296186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.296358] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.296371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.296704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.296717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.297027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.297041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.297224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.297237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 
00:38:54.105 [2024-06-10 14:07:08.297542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.297556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.297856] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.297870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.298156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.298169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.298398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.298411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.298627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.298640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.298829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.298842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.299060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.299073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.299382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.299395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.299705] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.299718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 00:38:54.105 [2024-06-10 14:07:08.300006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.105 [2024-06-10 14:07:08.300019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.105 qpair failed and we were unable to recover it. 
00:38:54.105 [2024-06-10 14:07:08.300264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.105 [2024-06-10 14:07:08.300277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:54.105 qpair failed and we were unable to recover it.
00:38:54.105-00:38:54.111 [the same three-message sequence repeats for every reconnect attempt of tqpair=0x7f7864000b90 from 2024-06-10 14:07:08.300264 through 14:07:08.360239; only the timestamps change between attempts]
00:38:54.111 [2024-06-10 14:07:08.360571] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.360590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.360899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.360912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.361167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.361180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.361412] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.361426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.361740] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.361753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.362071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.362084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.362439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.362453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.362713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.362727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.363040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.363054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.363321] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.363334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 
00:38:54.111 [2024-06-10 14:07:08.363596] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.363609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.363863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.363877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.364168] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.364181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.364350] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.364364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.364701] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.364715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.365000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.365014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.365275] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.365288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.365593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.365607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.111 qpair failed and we were unable to recover it. 00:38:54.111 [2024-06-10 14:07:08.365920] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.111 [2024-06-10 14:07:08.365935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.366165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.366178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 
00:38:54.112 [2024-06-10 14:07:08.366410] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.366424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.366669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.366683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.367000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.367013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.367311] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.367325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.367567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.367590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.367742] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.367755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.367925] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.367938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.368191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.368204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.368434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.368447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.368677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.368691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 
00:38:54.112 [2024-06-10 14:07:08.368960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.368974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.369287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.369301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.369657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.369671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.369925] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.369938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.370247] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.370260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.370518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.370531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.370765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.370778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.371109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.371122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.371433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.371446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.371781] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.371795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 
00:38:54.112 [2024-06-10 14:07:08.372090] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.372103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.372387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.372401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.372709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.372722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.372951] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.372964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.373277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.373291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.373550] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.373563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.373793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.373807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.374119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.374132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.374343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.374357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.374685] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.374699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 
00:38:54.112 [2024-06-10 14:07:08.375012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.375025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.375340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.375353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.375641] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.375655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.375914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.375928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.376165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.376178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.376445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.376458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.112 [2024-06-10 14:07:08.376699] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.112 [2024-06-10 14:07:08.376712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.112 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.376896] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.376909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.377147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.377162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.377394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.377407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 
00:38:54.113 [2024-06-10 14:07:08.377716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.377729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.378046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.378060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.378373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.378386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.378626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.378640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.378946] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.378959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.379193] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.379206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.379445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.379459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.379740] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.379753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.380062] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.380075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.380334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.380347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 
00:38:54.113 [2024-06-10 14:07:08.380634] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.380647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.380957] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.380971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.381230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.381244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.381553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.381566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.381811] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.381824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.382128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.382142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.382372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.382385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.382639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.382653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.382918] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.382932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.383258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.383271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 
00:38:54.113 [2024-06-10 14:07:08.383560] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.383573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.383797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.383811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.384110] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.384123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.384452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.384465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.384773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.384787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.384937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.384951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.385279] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.385292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.385627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.385640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.385933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.385947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.386236] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.386249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 
00:38:54.113 [2024-06-10 14:07:08.386504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.386518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.386832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.386845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.387094] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.387107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.387282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.387295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.387528] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.113 [2024-06-10 14:07:08.387541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.113 qpair failed and we were unable to recover it. 00:38:54.113 [2024-06-10 14:07:08.387826] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.387840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.388079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.388092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.388400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.388413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.388725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.388740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.388922] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.388936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 
00:38:54.114 [2024-06-10 14:07:08.389152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.389165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.389495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.389508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.389815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.389829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.389978] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.389991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.390318] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.390331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.390640] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.390654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.390889] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.390903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.391138] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.391151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.391439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.391452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.391730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.391744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 
00:38:54.114 [2024-06-10 14:07:08.391975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.391988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.392298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.392311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.392532] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.392545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.392858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.392872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.393188] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.393201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.393427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.393440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.393698] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.393712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.394030] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.394044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.394329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.394342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.394572] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.394590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 
00:38:54.114 [2024-06-10 14:07:08.394826] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.394839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.395057] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.395070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.395231] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.395244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.395552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.395565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.395763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.395776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.396072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.396085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.396348] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.396361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.396671] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.114 [2024-06-10 14:07:08.396685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.114 qpair failed and we were unable to recover it. 00:38:54.114 [2024-06-10 14:07:08.396940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.115 [2024-06-10 14:07:08.396953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.115 qpair failed and we were unable to recover it. 00:38:54.115 [2024-06-10 14:07:08.397186] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.115 [2024-06-10 14:07:08.397199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.115 qpair failed and we were unable to recover it. 
00:38:54.115 [2024-06-10 14:07:08.397514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.115 [2024-06-10 14:07:08.397527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420
00:38:54.115 qpair failed and we were unable to recover it.
00:38:54.120 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 14:07:08.397810 through 14:07:08.458875 ...]
00:38:54.120 [2024-06-10 14:07:08.459095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.120 [2024-06-10 14:07:08.459108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.120 qpair failed and we were unable to recover it. 00:38:54.120 [2024-06-10 14:07:08.459395] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.120 [2024-06-10 14:07:08.459408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.120 qpair failed and we were unable to recover it. 00:38:54.120 [2024-06-10 14:07:08.459702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.120 [2024-06-10 14:07:08.459715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.120 qpair failed and we were unable to recover it. 00:38:54.120 [2024-06-10 14:07:08.459954] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.120 [2024-06-10 14:07:08.459967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.120 qpair failed and we were unable to recover it. 00:38:54.120 [2024-06-10 14:07:08.460260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.120 [2024-06-10 14:07:08.460274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.120 qpair failed and we were unable to recover it. 00:38:54.120 [2024-06-10 14:07:08.460585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.120 [2024-06-10 14:07:08.460599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.120 qpair failed and we were unable to recover it. 00:38:54.120 [2024-06-10 14:07:08.460815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.120 [2024-06-10 14:07:08.460828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.120 qpair failed and we were unable to recover it. 00:38:54.120 [2024-06-10 14:07:08.461063] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.120 [2024-06-10 14:07:08.461077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.120 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.461400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.461413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.461710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.461724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 
00:38:54.121 [2024-06-10 14:07:08.462035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.462049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.462287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.462300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.462609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.462623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.462838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.462851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.463126] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.463139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.463427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.463440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.463675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.463689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.463908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.463921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.464219] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.464233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.464565] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.464583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 
00:38:54.121 [2024-06-10 14:07:08.464890] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.464903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.465198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.465211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.465514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.465527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.465773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.465786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.466036] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.466050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.466277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.466290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.466567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.466584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.466800] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.466813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.467117] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.467130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.467415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.467428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 
00:38:54.121 [2024-06-10 14:07:08.467746] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.467759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.468078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.468092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.468402] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.468415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.468702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.468716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.469001] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.469014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.469256] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.469270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.469565] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.469582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.469892] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.469905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.470209] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.470224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.470527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.470540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 
00:38:54.121 [2024-06-10 14:07:08.470852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.470866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.471158] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.471171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.471428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.471441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.471728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.471741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.472049] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.472063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.472216] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.121 [2024-06-10 14:07:08.472229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.121 qpair failed and we were unable to recover it. 00:38:54.121 [2024-06-10 14:07:08.472473] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.472486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.472797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.472810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.473065] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.473078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.473370] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.473384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 
00:38:54.122 [2024-06-10 14:07:08.473687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.473701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.474022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.474035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.474349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.474363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.474624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.474638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.474856] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.474869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.475155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.475168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.475463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.475477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.475803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.475817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.476061] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.476074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.476384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.476397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 
00:38:54.122 [2024-06-10 14:07:08.476614] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.476628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.476812] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.476826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.477082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.477095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.477396] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.477409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.477716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.477730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.477956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.477970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.478254] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.478267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.478515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.478528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.478820] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.478833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.479120] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.479133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 
00:38:54.122 [2024-06-10 14:07:08.479391] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.479404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.479661] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.479674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.479964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.479977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.480287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.480300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.480518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.480531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.480708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.480722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.481032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.481046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.481284] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.481297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.481515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.481530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.481872] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.481885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 
00:38:54.122 [2024-06-10 14:07:08.482178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.482191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.482477] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.482490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.482807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.482820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.483168] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.483182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.122 [2024-06-10 14:07:08.483515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.122 [2024-06-10 14:07:08.483528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.122 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.483864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.483877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.484116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.484130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.484437] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.484451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.484760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.484774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.485068] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.485081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 
00:38:54.123 [2024-06-10 14:07:08.485333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.485346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.485606] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.485619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.485858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.485871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.486186] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.486199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.486504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.486518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.486734] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.486748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.487008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.487021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.487326] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.487339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.487647] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.487661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.487967] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.487980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 
00:38:54.123 [2024-06-10 14:07:08.488221] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.488234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.488463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.488477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.488697] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.488710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.488926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.488939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.489195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.489208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.489524] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.489538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.489797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.489811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.490144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.490157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.490402] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.490415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.490718] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.490731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 
00:38:54.123 [2024-06-10 14:07:08.491040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.491054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.491312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.491325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.491545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.491559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.491829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.491843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.492093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.492106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.492413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.492426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.492713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.123 [2024-06-10 14:07:08.492727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.123 qpair failed and we were unable to recover it. 00:38:54.123 [2024-06-10 14:07:08.493015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.493028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 00:38:54.124 [2024-06-10 14:07:08.493281] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.493294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 00:38:54.124 [2024-06-10 14:07:08.493553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.493566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 
00:38:54.124 [2024-06-10 14:07:08.493879] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.493892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 00:38:54.124 [2024-06-10 14:07:08.494058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.494071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 00:38:54.124 [2024-06-10 14:07:08.494409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.494422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 00:38:54.124 [2024-06-10 14:07:08.494758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.494771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 00:38:54.124 [2024-06-10 14:07:08.495065] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.495079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 00:38:54.124 [2024-06-10 14:07:08.495323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.495336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 00:38:54.124 [2024-06-10 14:07:08.495653] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.495666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 00:38:54.124 [2024-06-10 14:07:08.495972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.495986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 00:38:54.124 [2024-06-10 14:07:08.496295] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.496308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 00:38:54.124 [2024-06-10 14:07:08.496615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.124 [2024-06-10 14:07:08.496629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.124 qpair failed and we were unable to recover it. 
00:38:54.124 [2024-06-10 14:07:08.496869] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:38:54.124 [2024-06-10 14:07:08.496882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 
00:38:54.124 qpair failed and we were unable to recover it. 
00:38:54.124 [the same three-line sequence repeats continuously for every retry of this qpair against 10.0.0.2:4420, with only the microsecond timestamps advancing, from 14:07:08.496 through 14:07:08.556] 
00:38:54.129 [2024-06-10 14:07:08.556004] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:38:54.129 [2024-06-10 14:07:08.556021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 
00:38:54.129 qpair failed and we were unable to recover it. 
00:38:54.129 [2024-06-10 14:07:08.556342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.129 [2024-06-10 14:07:08.556355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.129 qpair failed and we were unable to recover it. 00:38:54.129 [2024-06-10 14:07:08.556667] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.129 [2024-06-10 14:07:08.556681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.130 qpair failed and we were unable to recover it. 00:38:54.130 [2024-06-10 14:07:08.557038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.130 [2024-06-10 14:07:08.557052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.130 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.557366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.557382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.557697] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.557711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.557895] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.557912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.558222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.558236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.558980] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.559005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.559340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.559352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.559665] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.559678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 
00:38:54.394 [2024-06-10 14:07:08.559867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.559880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.560175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.560187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.560428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.560440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.560719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.560733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.560995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.561007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.561247] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.561259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.561557] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.561570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.561852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.561865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.562116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.562128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.562376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.562389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 
00:38:54.394 [2024-06-10 14:07:08.562700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.562712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.562941] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.562953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.563210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.563222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.563378] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.563391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.563719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.563732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.564043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.564055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.394 qpair failed and we were unable to recover it. 00:38:54.394 [2024-06-10 14:07:08.564367] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.394 [2024-06-10 14:07:08.564379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.564639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.564652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.564940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.564953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.565288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.565300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 
00:38:54.395 [2024-06-10 14:07:08.565653] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.565666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.565899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.565911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.566198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.566210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.566503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.566515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.566825] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.566838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.567058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.567070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.567307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.567319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.567600] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.567612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.567951] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.567963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.568279] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.568291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 
00:38:54.395 [2024-06-10 14:07:08.568508] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.568520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.568832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.568845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.569153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.569165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.569401] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.569414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.569691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.569704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.569921] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.569936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.570239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.570251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.570572] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.570589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.570884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.570896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.571201] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.571213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 
00:38:54.395 [2024-06-10 14:07:08.571584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.571597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.571816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.571828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.572065] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.572077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.572410] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.572422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.572733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.572746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.573056] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.573069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.573304] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.573316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.573623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.573635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.573873] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.573885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.574122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.574135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 
00:38:54.395 [2024-06-10 14:07:08.574415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.574427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.574717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.574730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.574949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.574961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.395 [2024-06-10 14:07:08.575194] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.395 [2024-06-10 14:07:08.575206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.395 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.575466] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.575477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.575691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.575703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.576032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.576043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.576355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.576367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.576672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.576685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.576996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.577010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 
00:38:54.396 [2024-06-10 14:07:08.577336] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.577348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.577702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.577714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.578051] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.578063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.578299] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.578312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.578564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.578581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.578766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.578778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.579084] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.579096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.579407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.579419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.579658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.579677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.579990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.580003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 
00:38:54.396 [2024-06-10 14:07:08.580303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.580315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.580613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.580625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.580795] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.580808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.581054] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.581066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.581372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.581384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.581624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.581639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.581948] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.581960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.582191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.582203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.582379] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.582391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.582703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.582715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 
00:38:54.396 [2024-06-10 14:07:08.583026] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.583039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.583342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.583355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.583665] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.583678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.583970] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.583982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.584204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.584216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.584331] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.584344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.584594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.584608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.584919] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.584932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.585189] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.585201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.585515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.585528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 
00:38:54.396 [2024-06-10 14:07:08.585789] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.396 [2024-06-10 14:07:08.585803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.396 qpair failed and we were unable to recover it. 00:38:54.396 [2024-06-10 14:07:08.586112] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.586124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.586309] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.586322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.586611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.586624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.586924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.586936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.587173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.587186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.587451] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.587463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.587775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.587788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.588033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.588045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.588347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.588360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 
00:38:54.397 [2024-06-10 14:07:08.588648] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.588661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.588972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.588984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.589150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.589162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.589421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.589433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.589652] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.589665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.589975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.589988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.590172] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.590185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.590353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.590365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.590600] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.590613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.590927] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.590939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 
00:38:54.397 [2024-06-10 14:07:08.591170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.591183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.591365] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.591377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.591605] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.591618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.591845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.591857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.592097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.592109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.592286] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.592299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.592540] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.592553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.592822] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.592835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.593122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.593134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.593395] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.593407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 
00:38:54.397 [2024-06-10 14:07:08.593694] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.593706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.593940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.593952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.594178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.594190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.594444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.594456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.594742] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.594754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.594972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.594985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.595272] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.595284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.595535] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.595547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.595793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.397 [2024-06-10 14:07:08.595805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.397 qpair failed and we were unable to recover it. 00:38:54.397 [2024-06-10 14:07:08.595984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.595996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 
00:38:54.398 [2024-06-10 14:07:08.596226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.596238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.596469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.596481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.596743] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.596755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.597021] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.597033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.597203] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.597215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.597512] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.597524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.597682] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.597694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.597983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.597995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.598303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.598316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.598484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.598496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 
00:38:54.398 [2024-06-10 14:07:08.598806] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.598818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.599145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.599157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.599344] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.599356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.599636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.599648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.599933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.599945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.600236] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.600248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.600502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.600514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.600792] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.600804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.601063] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.601075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.601306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.601319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 
00:38:54.398 [2024-06-10 14:07:08.601552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.601564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.601862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.601874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.602172] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.602187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.602518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.602531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.602779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.602797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.603038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.603053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.603290] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.603302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.603558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.603570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.603808] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.603820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.604055] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.604068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 
00:38:54.398 [2024-06-10 14:07:08.604353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.604365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.398 [2024-06-10 14:07:08.604593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.398 [2024-06-10 14:07:08.604606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.398 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.604896] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.604908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.605163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.605175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.605394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.605406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.605638] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.605651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.605898] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.605910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.606125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.606137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.606373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.606385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.606610] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.606622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 
00:38:54.399 [2024-06-10 14:07:08.606735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.606747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.607058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.607071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.607344] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.607355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.607665] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.607677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.607788] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.607800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.608026] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.608038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.608254] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.608266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.608489] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.608501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.608730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.608742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.609039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.609052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 
00:38:54.399 [2024-06-10 14:07:08.609307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.609319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.609628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.609640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.609805] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.609818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.610060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.610072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.610323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.610335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.610521] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.610533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.610825] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.610837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.611076] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.611088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.611375] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.611387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.611609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.611621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 
00:38:54.399 [2024-06-10 14:07:08.611923] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.611935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.612239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.612251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.612469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.612481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.612662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.612674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.612916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.612928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.613236] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.613250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.613535] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.613547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.613808] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.613821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.399 qpair failed and we were unable to recover it. 00:38:54.399 [2024-06-10 14:07:08.614104] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.399 [2024-06-10 14:07:08.614116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.614343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.614355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 
00:38:54.400 [2024-06-10 14:07:08.614585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.614597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.614896] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.614908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.615211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.615223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.615551] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.615563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.615757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.615769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.616081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.616093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.616329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.616342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.616583] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.616595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.616819] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.616831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.617103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.617115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 
00:38:54.400 [2024-06-10 14:07:08.617351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.617363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.617583] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.617595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.617849] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.617862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.618166] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.618179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.618423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.618434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.618657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.618669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.618818] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.618830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.619139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.619151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.619392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.619404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.619687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.619699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 
00:38:54.400 [2024-06-10 14:07:08.619986] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.619998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.620312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.620324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.620545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.620557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.620792] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.620805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.621058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.621070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.621311] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.621323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.621664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.621676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.621842] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.621854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.622147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.622160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.622403] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.622415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 
00:38:54.400 [2024-06-10 14:07:08.622635] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.622647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.622881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.622893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.623110] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.623122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.623292] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.623305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.623465] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.623477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.623629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.400 [2024-06-10 14:07:08.623644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.400 qpair failed and we were unable to recover it. 00:38:54.400 [2024-06-10 14:07:08.623876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.623888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.624071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.624084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.624307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.624319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.624554] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.624566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 
00:38:54.401 [2024-06-10 14:07:08.624814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.624827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.625117] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.625130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.625290] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.625302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.625468] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.625480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.625666] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.625678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.625918] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.625930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.626166] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.626179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.626432] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.626444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.626671] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.626683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.626913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.626925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 
00:38:54.401 [2024-06-10 14:07:08.627227] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.627239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.627569] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.627585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.627821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.627833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.628053] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.628065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.628379] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.628391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.628700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.628713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.628960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.628972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.629219] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.629231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.629518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.629530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.629780] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.629792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 
00:38:54.401 [2024-06-10 14:07:08.630054] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.630066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.630379] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.630391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.630624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.630637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.630861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.630873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.631186] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.631198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.631436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.631448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.631741] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.631753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.632002] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.632014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.632201] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.632214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.632387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.632400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 
00:38:54.401 [2024-06-10 14:07:08.632687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.632700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.632937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.632949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.633229] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.633241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.633408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.633420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.401 qpair failed and we were unable to recover it. 00:38:54.401 [2024-06-10 14:07:08.633640] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.401 [2024-06-10 14:07:08.633653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.633912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.633925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.634235] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.634248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.634468] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.634481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.634702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.634714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.635004] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.635016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 
00:38:54.402 [2024-06-10 14:07:08.635266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.635279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.635514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.635526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.635834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.635846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.636110] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.636122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.636374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.636386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.636541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.636554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.636861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.636873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.637039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.637051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.637289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.637301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.637593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.637606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 
00:38:54.402 [2024-06-10 14:07:08.637900] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.637912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.638212] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.638224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.638569] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.638586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.638847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.638860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.639167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.639180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.639465] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.639477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.639663] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.639675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.639962] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.639974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.640150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.640161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.640467] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.640479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 
00:38:54.402 [2024-06-10 14:07:08.640716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.640728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.640990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.641002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.641286] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.641300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.641469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.641481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.641660] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.641672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.641907] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.641919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.642203] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.642215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.642381] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.642393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.642701] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.642713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.643008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.643021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 
00:38:54.402 [2024-06-10 14:07:08.643258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.643270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.643524] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.402 [2024-06-10 14:07:08.643536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.402 qpair failed and we were unable to recover it. 00:38:54.402 [2024-06-10 14:07:08.643707] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.643719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.643945] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.643957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.644235] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.644247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.644477] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.644489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.644808] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.644820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.645080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.645093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.645334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.645348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.645581] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.645593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 
00:38:54.403 [2024-06-10 14:07:08.645833] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.645845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.646158] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.646171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.646407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.646419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.646704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.646716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.646948] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.646960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.647249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.647261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.647569] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.647587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.647758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.647770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.647951] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.647963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.648249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.648261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 
00:38:54.403 [2024-06-10 14:07:08.648544] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.648556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.648736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.648748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.648911] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.648922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.649177] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.649190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.649501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.649513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.649728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.649741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.650027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.650039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.650207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.650219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.650527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.650541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.650849] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.650862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 
00:38:54.403 [2024-06-10 14:07:08.651118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.651130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.651457] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.651470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.651776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.651792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.652069] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.652082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.652319] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.652331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.403 qpair failed and we were unable to recover it. 00:38:54.403 [2024-06-10 14:07:08.652588] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.403 [2024-06-10 14:07:08.652601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.652770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.652782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.652901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.652913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.653161] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.653173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.653484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.653497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 
00:38:54.404 [2024-06-10 14:07:08.653735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.653748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.653973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.653985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.654133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.654145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.654452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.654465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.654682] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.654694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.654873] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.654885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.655214] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.655226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.655447] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.655459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.655761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.655774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.655889] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.655901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 
00:38:54.404 [2024-06-10 14:07:08.656187] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.656198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.656493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.656505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.656745] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.656758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.657055] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.657068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.657374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.657386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.657624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.657636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.657893] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.657906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.658149] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.658161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.658443] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.658455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.658693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.658706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 
00:38:54.404 [2024-06-10 14:07:08.658913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.658925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.659108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.659121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.659404] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.659416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.659636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.659649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.659869] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.659882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.660118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.660130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.660308] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.660320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.660633] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.660646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.660910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.660922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.661164] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.661177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 
00:38:54.404 [2024-06-10 14:07:08.661413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.661425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.661651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.661664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.661903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.661917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.404 [2024-06-10 14:07:08.662179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.404 [2024-06-10 14:07:08.662191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.404 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.662363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.662375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.662597] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.662609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.662917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.662930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.663100] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.663112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.663405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.663417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.663737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.663750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 
00:38:54.405 [2024-06-10 14:07:08.663969] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.663981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.664309] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.664321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.664603] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.664616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.664898] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.664911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.665194] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.665206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.665355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.665367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.665606] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.665619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.665866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.665879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.666180] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.666192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.666424] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.666437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 
00:38:54.405 [2024-06-10 14:07:08.666767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.666779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.667000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.667012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.667244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.667257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.667493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.667506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.667767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.667780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.668015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.668028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.668246] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.668259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.668521] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.668534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.668849] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.668861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.669103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.669116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 
00:38:54.405 [2024-06-10 14:07:08.669423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.669436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.669649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.669661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.669973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.669986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.670223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.670236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.670543] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.670555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.670791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.670804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.671032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.671044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.671310] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.671322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.671542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.671554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.671725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.671737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 
00:38:54.405 [2024-06-10 14:07:08.671908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.671920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.672152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.405 [2024-06-10 14:07:08.672164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.405 qpair failed and we were unable to recover it. 00:38:54.405 [2024-06-10 14:07:08.672394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.672408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.672716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.672729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.672953] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.672966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.673259] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.673271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.673504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.673517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.673756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.673768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.674052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.674065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.674244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.674257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 
00:38:54.406 [2024-06-10 14:07:08.674490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.674502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.674732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.674745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.674977] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.674989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.675218] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.675231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.675521] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.675534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.675763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.675776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.676062] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.676075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.676388] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.676401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.676639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.676652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.676959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.676971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 
00:38:54.406 [2024-06-10 14:07:08.677060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.677073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.677332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.677345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.677657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.677670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.677905] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.677918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.678158] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.678170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.678407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.678420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.678649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.678662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.678833] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.678846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.679076] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.679088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.679272] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.679285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 
00:38:54.406 [2024-06-10 14:07:08.679591] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.679604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.679835] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.679847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.680083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.680095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.680366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.680380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.680598] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.680611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.680834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.680846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.681096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.681108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.681385] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.681397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.681626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.681639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 00:38:54.406 [2024-06-10 14:07:08.681868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.406 [2024-06-10 14:07:08.681881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.406 qpair failed and we were unable to recover it. 
00:38:54.406 [2024-06-10 14:07:08.682116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.682128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.682417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.682429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.682667] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.682681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.682897] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.682910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.683135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.683147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.683435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.683449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.683671] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.683690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.683853] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.683865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.684116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.684128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.684346] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.684359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 
00:38:54.407 [2024-06-10 14:07:08.684584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.684596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.684855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.684868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.685188] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.685200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.685461] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.685474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.685561] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.685573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.685790] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.685802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.686035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.686047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.686355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.686368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.686587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.686600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.686769] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.686781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 
00:38:54.407 [2024-06-10 14:07:08.687088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.687100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.687256] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.687268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.687494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.687506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.687760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.687773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.688009] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.688022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.688260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.688272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.688486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.688499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.688818] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.688831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.689066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.689079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.689415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.689428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 
00:38:54.407 [2024-06-10 14:07:08.689656] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.689669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.689904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.689916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.690029] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.690041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.690217] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.690229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.690515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.690528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.690764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.690777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.690928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.690940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.691222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.691234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.407 [2024-06-10 14:07:08.691470] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.407 [2024-06-10 14:07:08.691483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.407 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.691767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.691779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 
00:38:54.408 [2024-06-10 14:07:08.692015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.692027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.692261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.692274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.692510] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.692524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.692762] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.692775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.692941] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.692953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.693200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.693213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.693428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.693440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.693733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.693745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.693962] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.693974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.694159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.694171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 
00:38:54.408 [2024-06-10 14:07:08.694480] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.694493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.694783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.694796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.695036] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.695048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.695287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.695299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.695611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.695623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.695885] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.695897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.696066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.696078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.696367] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.696380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.696615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.696627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.696880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.696892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 
00:38:54.408 [2024-06-10 14:07:08.697132] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.697144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.697453] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.697466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.697752] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.697765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.697995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.698007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.698175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.698188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.698482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.698495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.698829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.698842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.699019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.699031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.699335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.699348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.699517] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.699529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 
00:38:54.408 [2024-06-10 14:07:08.699770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.699783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.700016] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.408 [2024-06-10 14:07:08.700029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.408 qpair failed and we were unable to recover it. 00:38:54.408 [2024-06-10 14:07:08.700351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.700363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.700607] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.700620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.700725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.700737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.701044] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.701056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.701343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.701355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.701581] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.701594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.701763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.701775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.702078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.702091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 
00:38:54.409 [2024-06-10 14:07:08.702331] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.702343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.702591] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.702604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.702859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.702875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.703093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.703105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.703345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.703357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.703644] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.703657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.703887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.703900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.704163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.704175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.704393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.704405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.704580] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.704593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 
00:38:54.409 [2024-06-10 14:07:08.704855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.704867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.705153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.705166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.705384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.705396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.705644] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.705657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.705880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.705892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.706200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.706212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.706457] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.706469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.706732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.706745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.706991] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.707003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.707170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.707182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 
00:38:54.409 [2024-06-10 14:07:08.707467] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.707479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.707662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.707675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.707905] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.707917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.708218] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.708231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.708383] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.708395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.708694] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.708707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.708924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.708936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.709113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.709125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.709293] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.709306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 00:38:54.409 [2024-06-10 14:07:08.709619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.709631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.409 qpair failed and we were unable to recover it. 
00:38:54.409 [2024-06-10 14:07:08.709918] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.409 [2024-06-10 14:07:08.709930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.710207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.710220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.710513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.710526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.710786] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.710798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.710959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.710971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.711139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.711152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.711372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.711384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.711549] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.711562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.711803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.711816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.712105] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.712117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 
00:38:54.410 [2024-06-10 14:07:08.712375] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.712387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.712626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.712639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.712948] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.712962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.713193] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.713205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.713490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.713502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.713790] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.713803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.714032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.714044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.714290] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.714302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.714537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.714549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.714778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.714791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 
00:38:54.410 [2024-06-10 14:07:08.715028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.715040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.715262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.715274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.715584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.715596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.715816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.715829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.715993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.716005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.716306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.716318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.716501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.716513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.716749] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.716761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.716993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.717005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.717324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.717335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 
00:38:54.410 [2024-06-10 14:07:08.717567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.717582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.717840] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.717852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.718163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.718175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.718459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.718471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.718711] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.718723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.719030] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.719042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.719356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.719368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.719672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.719685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.410 [2024-06-10 14:07:08.719994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.410 [2024-06-10 14:07:08.720006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.410 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.720295] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.720307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 
00:38:54.411 [2024-06-10 14:07:08.720617] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.720629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.720942] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.720954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.721260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.721272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.721560] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.721572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.721891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.721903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.722207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.722219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.722465] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.722477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.722763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.722776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.723087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.723099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.723388] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.723399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 
00:38:54.411 [2024-06-10 14:07:08.723690] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.723702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.723947] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.723960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.724267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.724281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.724509] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.724521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.724755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.724767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.725088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.725100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.725356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.725368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.725586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.725598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.725928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.725940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.726243] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.726256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 
00:38:54.411 [2024-06-10 14:07:08.726589] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.726601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.726912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.726924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.727214] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.727226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.727538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.727550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.727791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.727803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.728065] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.728077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.728315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.728327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.728561] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.728573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.728810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.728822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.729152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.729164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 
00:38:54.411 [2024-06-10 14:07:08.729399] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.729411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.729717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.729729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.730039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.730051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.730298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.730310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.730591] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.730604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.730930] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.730943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.731174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.411 [2024-06-10 14:07:08.731186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.411 qpair failed and we were unable to recover it. 00:38:54.411 [2024-06-10 14:07:08.731501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.731513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.731838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.731850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.732097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.732110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 
00:38:54.412 [2024-06-10 14:07:08.732329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.732341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.732559] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.732571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.732880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.732892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.733188] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.733200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.733438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.733450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.733785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.733797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.734084] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.734096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.734399] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.734411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.734719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.734731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.735031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.735043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 
00:38:54.412 [2024-06-10 14:07:08.735283] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.735295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.735543] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.735555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.735859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.735873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.736108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.736120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.736427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.736439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.736661] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.736673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.736910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.736922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.737231] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.737243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.737504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.737516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.737827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.737840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 
00:38:54.412 [2024-06-10 14:07:08.738154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.738166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.738449] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.738460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.738686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.738698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.738882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.738894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.739204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.739216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.739527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.739539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.739777] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.739789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.740096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.740109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.740403] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.740415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.412 [2024-06-10 14:07:08.740724] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.740736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 
00:38:54.412 [2024-06-10 14:07:08.740971] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.412 [2024-06-10 14:07:08.740983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.412 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.741270] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.741282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.741527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.741539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.741795] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.741807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.742123] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.742135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.742444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.742456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.742764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.742776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.743085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.743097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.743392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.743404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.743648] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.743660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 
00:38:54.413 [2024-06-10 14:07:08.743893] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.743905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.744234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.744246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.744537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.744549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.744847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.744860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.745088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.745100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.745384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.745396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.745700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.745713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.746030] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.746042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.746355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.746367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.746685] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.746697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 
00:38:54.413 [2024-06-10 14:07:08.747019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.747031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.747387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.747400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.747646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.747661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.747916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.747928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:54.413 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:38:54.413 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:54.413 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:54.413 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:54.413 [2024-06-10 14:07:08.749274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.749299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.749623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.749637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.749944] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.749957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.750267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.750280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 
00:38:54.413 [2024-06-10 14:07:08.750519] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.750532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.750770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.750782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.751021] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.751033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.751264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.751276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.751535] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.751547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.751861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.751874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.752162] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.752175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.752494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.752506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.752726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.413 [2024-06-10 14:07:08.752740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.413 qpair failed and we were unable to recover it. 00:38:54.413 [2024-06-10 14:07:08.752852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.752866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 
00:38:54.414 [2024-06-10 14:07:08.753176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.753188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.753424] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.753437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.753722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.753736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.754054] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.754067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.754295] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.754307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.754553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.754566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.754843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.754855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.755092] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.755104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.755334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.755346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.755573] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.755592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 
00:38:54.414 [2024-06-10 14:07:08.755903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.755916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.756159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.756171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.756475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.756487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.756655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.756668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.756981] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.756994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.757170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.757182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.757436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.757449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.757682] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.757694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.757980] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.757992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.758269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.758282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 
00:38:54.414 [2024-06-10 14:07:08.758587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.758600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.758916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.758928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.759194] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.759206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.759488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.759501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.759721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.759733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.759990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.760002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.760247] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.760259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.760484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.760497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.760735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.760747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.760932] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.760946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 
00:38:54.414 [2024-06-10 14:07:08.761201] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.761214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.761450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.761462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.761716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.761729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.761897] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.761908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.762153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.762165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.762477] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.762490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.762759] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.762773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.414 qpair failed and we were unable to recover it. 00:38:54.414 [2024-06-10 14:07:08.763005] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.414 [2024-06-10 14:07:08.763017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.763169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.763181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.763362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.763375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 
00:38:54.415 [2024-06-10 14:07:08.763614] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.763628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.763849] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.763861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.764125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.764138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.764384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.764397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.764684] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.764696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.764984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.764996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.765192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.765204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.765493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.765505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.765809] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.765822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.766085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.766099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 
00:38:54.415 [2024-06-10 14:07:08.766387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.766399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.766694] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.766707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.766993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.767005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.767234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.767246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.767507] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.767519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.767831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.767843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.768152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.768165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.768461] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.768472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.768693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.768705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.768884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.768896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 
00:38:54.415 [2024-06-10 14:07:08.769181] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.769193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.769411] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.769423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.769671] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.769683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.769857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.769869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.770170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.770182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.770450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.770462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.770693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.770706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.770982] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.770994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.771244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.771256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.771542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.771554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 
00:38:54.415 [2024-06-10 14:07:08.771669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.771682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.771972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.771985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.772154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.772166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.772397] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.772410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.772565] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.772582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.772842] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.415 [2024-06-10 14:07:08.772855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.415 qpair failed and we were unable to recover it. 00:38:54.415 [2024-06-10 14:07:08.773043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.773055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.773223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.773235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.773423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.773436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.773608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.773620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 
00:38:54.416 [2024-06-10 14:07:08.773790] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.773803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.774031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.774043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.774263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.774275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.774517] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.774530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.774787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.774799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.774966] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.774978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.775312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.775324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.775493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.775505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.775778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.775790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.776046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.776060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 
00:38:54.416 [2024-06-10 14:07:08.776340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.776353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.776534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.776547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.776767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.776780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.776942] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.776954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.777188] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.777200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.777511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.777525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.777690] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.777704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.777982] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.777994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.778227] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.778240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.778528] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.778540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 
00:38:54.416 [2024-06-10 14:07:08.778738] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.778750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.778917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.778929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.779096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.779109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.779290] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.779303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.779613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.779626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.779915] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.779928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.780081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.780094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.780263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.780275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.780491] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.780503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.780672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.780685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 
00:38:54.416 [2024-06-10 14:07:08.780853] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.780865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.781102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.781114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.781266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.781279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.781537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.781550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.416 qpair failed and we were unable to recover it. 00:38:54.416 [2024-06-10 14:07:08.781833] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.416 [2024-06-10 14:07:08.781846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.782008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.782021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.782265] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.782278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.782436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.782448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.782604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.782616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.782841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.782854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 
00:38:54.417 [2024-06-10 14:07:08.783079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.783091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.783335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.783348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.783666] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.783678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.783872] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.783884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.784069] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.784081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.784332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.784344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.784630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.784643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.784973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.784986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.785234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.785246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.785537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.785552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 
00:38:54.417 [2024-06-10 14:07:08.785820] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.785833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.786026] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.786041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.786219] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.786232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.786533] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.786546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.786785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.786798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.787108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.787120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.787302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.787314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.787597] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.787610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.787746] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.787759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.787989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.788003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 
00:38:54.417 [2024-06-10 14:07:08.788317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.788329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.788619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.788632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.788867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.788879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.789072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.789085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.789389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.789400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.789639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.789651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.789847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.789859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.417 qpair failed and we were unable to recover it. 00:38:54.417 [2024-06-10 14:07:08.790048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.417 [2024-06-10 14:07:08.790060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.790317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.790329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.790643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.790657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 
00:38:54.418 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:54.418 [2024-06-10 14:07:08.790894] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.790907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:54.418 [2024-06-10 14:07:08.791165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.791180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.418 [2024-06-10 14:07:08.791505] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.791518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:54.418 [2024-06-10 14:07:08.791779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.791793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.792129] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.792142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.792416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.792429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.792715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.792728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.792910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.792922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 
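[editor's note] For context on the interleaved records above: errno = 111 is ECONNREFUSED on Linux, so each "connect() failed" line means the initiator's TCP connect to 10.0.0.2 port 4420 (the standard NVMe/TCP port) was actively refused because nothing was accepting on the target side at that instant, which is the condition this target_disconnect test repeatedly provokes. The bash trace shows the test installing its cleanup trap and then creating the backing bdev over RPC. A minimal sketch of the equivalent standalone call, assuming the stock scripts/rpc.py entry point talking to a running SPDK target on the default RPC socket:

    # create a 64 MiB malloc bdev with 512-byte blocks, named Malloc0 (mirrors the rpc_cmd trace above)
    $ scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0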
00:38:54.418 [2024-06-10 14:07:08.793181] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.793194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.793426] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.793438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.793620] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.793632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.793919] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.793932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.794266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.794278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.794613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.794626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.794862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.794874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.795113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.795125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.795376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.795388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.795632] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.795647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 
00:38:54.418 [2024-06-10 14:07:08.795892] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.795904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.796086] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.796098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.796468] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.796480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.796782] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.796794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.797080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.797092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.797285] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.797297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.797609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.797622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.797886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.797898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.798137] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.798150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.798371] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.798384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 
00:38:54.418 [2024-06-10 14:07:08.798652] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.798665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.798901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.798913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.799209] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.799221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.799454] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.799466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.799778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.799790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.418 [2024-06-10 14:07:08.800107] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.418 [2024-06-10 14:07:08.800119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.418 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.800327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.800339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.800485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.800497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.800788] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.800800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.801043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.801055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 
00:38:54.419 [2024-06-10 14:07:08.801362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.801374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.801687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.801699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.801880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.801893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.802197] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.802210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.802428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.802440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.802726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.802739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.803027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.803039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.803377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.803390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.803740] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.803753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.804013] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.804025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 
00:38:54.419 [2024-06-10 14:07:08.804356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.804368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.804668] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.804681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.804966] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.804978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.805266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.805279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.805511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.805523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.805831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.805844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.806145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.806158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.806476] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.806489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.806758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.806771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.807037] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.807052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 
00:38:54.419 [2024-06-10 14:07:08.807373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.807386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.807645] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.807658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.807977] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.807990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.808349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.808362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.808592] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.808605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.808916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.808929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.809232] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.809245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.809557] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.809570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.809933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.809947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.810261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.810275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 
00:38:54.419 [2024-06-10 14:07:08.810598] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.810612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.810913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.810925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.811156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.811168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.811348] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.419 [2024-06-10 14:07:08.811360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.419 qpair failed and we were unable to recover it. 00:38:54.419 [2024-06-10 14:07:08.811665] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.811677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.811988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.812001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.812240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.812252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.812563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.812578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.812905] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.812917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.813210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.813223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 
00:38:54.420 [2024-06-10 14:07:08.813547] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.813559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 Malloc0 00:38:54.420 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.420 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:54.420 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.420 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:54.420 [2024-06-10 14:07:08.814476] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.814499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.814819] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.814833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.815146] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.815159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.815381] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.815393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.815636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.815648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.815943] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.815956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.816238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.816250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 
00:38:54.420 [2024-06-10 14:07:08.816558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.816571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.816906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.816918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.817176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.817188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.817461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:54.420 [2024-06-10 14:07:08.817480] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.817493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.817801] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.817814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.817995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.818007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.818315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.818327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.818639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.818651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.818937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.818949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 
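[editor's note] The "*** TCP Transport Init ***" notice above is the target-side confirmation of the nvmf_create_transport -t tcp call traced a few records earlier; the TCP transport must exist before any subsystem can expose a TCP listener, so the initiator's connect attempts keep failing until this point and the listener setup that follows. A hedged sketch of the standalone RPC call, with the same flags as the trace (the trailing -o is reproduced verbatim from the log rather than expanded here):

    # create the NVMe-oF TCP transport on the target (mirrors "rpc_cmd nvmf_create_transport -t tcp -o")
    $ scripts/rpc.py nvmf_create_transport -t tcp -o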
00:38:54.420 [2024-06-10 14:07:08.819268] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.819280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.819597] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.819609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.819964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.819976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.820315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.820327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.820610] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.820623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.820931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.820943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.821208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.821221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.821533] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.821545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.821780] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.821793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.822081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.822093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 
00:38:54.420 [2024-06-10 14:07:08.822398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.822410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.822649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.822662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.822949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.822961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.420 qpair failed and we were unable to recover it. 00:38:54.420 [2024-06-10 14:07:08.823204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.420 [2024-06-10 14:07:08.823216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.823531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.823543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.823779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.823791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.824099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.824111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.824387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.824399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.824646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.824658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.824960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.824971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 
00:38:54.421 [2024-06-10 14:07:08.825192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.825204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.825367] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.825379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.421 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:54.421 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.421 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:54.421 [2024-06-10 14:07:08.826016] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.826036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.826353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.826367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.826670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.826683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.826946] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.826961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.827215] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.827227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.827520] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.827532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 
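The xtrace output interleaved above shows target_disconnect.sh creating the subsystem under test (rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001) while the host-side reconnect loop keeps failing. A minimal sketch of the same step issued directly, assuming SPDK's scripts/rpc.py client and the target's default RPC socket:

    # Create the NVMe-oF subsystem the test connects to; -a allows any host NQN,
    # -s sets the serial number reported by the controller.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001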
00:38:54.421 [2024-06-10 14:07:08.827756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.827769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.828066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.828078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.828313] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.828326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.828658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.828670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.828956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.828968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.829207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.829219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.829509] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.829521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.829821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.829833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.830151] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.830163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.830474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.830486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 
00:38:54.421 [2024-06-10 14:07:08.830726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.830739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.831045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.831057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.831343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.831355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.831604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.831616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.831924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.831937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.832164] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.832176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.832507] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.832519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.421 qpair failed and we were unable to recover it. 00:38:54.421 [2024-06-10 14:07:08.832814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.421 [2024-06-10 14:07:08.832827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.833119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.833132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.833430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.833442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 
00:38:54.422 [2024-06-10 14:07:08.833733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.833746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.833966] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.833978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.422 [2024-06-10 14:07:08.834215] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.834227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:54.422 [2024-06-10 14:07:08.834529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.834546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.834830] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.422 [2024-06-10 14:07:08.834842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.835094] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:54.422 [2024-06-10 14:07:08.835107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.835394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.835406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.835694] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.835706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 
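Here the script attaches a namespace to that subsystem (rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 in the trace above). A hedged sketch of the equivalent direct call, assuming a Malloc0 bdev has already been created in the target:

    # Expose the Malloc0 bdev as a namespace of cnode1.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0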
00:38:54.422 [2024-06-10 14:07:08.835928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.835940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.836251] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.836263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.836502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.836514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.836744] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.836757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.837068] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.837081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.837417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.837429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.837669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.837681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.838005] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.838018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.838317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.838329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.838637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.838649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 
00:38:54.422 [2024-06-10 14:07:08.838909] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.838922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.839228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.839240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.839546] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.839558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.839873] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.839885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.840119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.840131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.840444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.840457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.840761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.840773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.841032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.841044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.841279] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.841292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.841619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.841631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 
00:38:54.422 [2024-06-10 14:07:08.841863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.841875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.422 [2024-06-10 14:07:08.842173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.842186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:54.422 [2024-06-10 14:07:08.842524] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.842536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 [2024-06-10 14:07:08.842775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.842788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.422 [2024-06-10 14:07:08.842956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.422 [2024-06-10 14:07:08.842969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.422 qpair failed and we were unable to recover it. 00:38:54.422 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:54.423 [2024-06-10 14:07:08.843277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.423 [2024-06-10 14:07:08.843289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.423 qpair failed and we were unable to recover it. 00:38:54.423 [2024-06-10 14:07:08.843583] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.423 [2024-06-10 14:07:08.843595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.423 qpair failed and we were unable to recover it. 00:38:54.423 [2024-06-10 14:07:08.843835] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.423 [2024-06-10 14:07:08.843847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.423 qpair failed and we were unable to recover it. 
00:38:54.423 [2024-06-10 14:07:08.844150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.423 [2024-06-10 14:07:08.844162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.423 qpair failed and we were unable to recover it. 00:38:54.423 [2024-06-10 14:07:08.844447] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.423 [2024-06-10 14:07:08.844459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.423 qpair failed and we were unable to recover it. 00:38:54.423 [2024-06-10 14:07:08.844685] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.423 [2024-06-10 14:07:08.844697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.423 qpair failed and we were unable to recover it. 00:38:54.423 [2024-06-10 14:07:08.844916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.423 [2024-06-10 14:07:08.844928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.423 qpair failed and we were unable to recover it. 00:38:54.423 [2024-06-10 14:07:08.845156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.423 [2024-06-10 14:07:08.845168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.423 qpair failed and we were unable to recover it. 00:38:54.423 [2024-06-10 14:07:08.845493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.423 [2024-06-10 14:07:08.845505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7864000b90 with addr=10.0.0.2, port=4420 00:38:54.423 qpair failed and we were unable to recover it. 00:38:54.423 [2024-06-10 14:07:08.845738] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:54.423 [2024-06-10 14:07:08.848161] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.423 [2024-06-10 14:07:08.848267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.423 [2024-06-10 14:07:08.848288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.423 [2024-06-10 14:07:08.848299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.423 [2024-06-10 14:07:08.848308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.423 [2024-06-10 14:07:08.848332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.423 qpair failed and we were unable to recover it. 
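The nvmf_subsystem_add_listener call traced above is what produces the "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" notice. Until that listener exists, every host-side connect() is refused, which is the errno 111 (ECONNREFUSED) error repeated throughout this log; once it exists, the fabric CONNECT command can still be rejected, as in the sct 1, sc 130 completion shown immediately above. A sketch of the listener step, again assuming scripts/rpc.py:

    # Add a TCP listener on the portal the host initiator is dialing.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420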
00:38:54.423 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.423 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:54.423 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.423 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:54.681 [2024-06-10 14:07:08.858082] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.681 [2024-06-10 14:07:08.858194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.681 [2024-06-10 14:07:08.858215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.681 [2024-06-10 14:07:08.858226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.681 [2024-06-10 14:07:08.858235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.681 [2024-06-10 14:07:08.858255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.681 qpair failed and we were unable to recover it. 00:38:54.681 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.681 14:07:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1649691 00:38:54.681 [2024-06-10 14:07:08.868104] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.681 [2024-06-10 14:07:08.868203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.681 [2024-06-10 14:07:08.868223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.681 [2024-06-10 14:07:08.868233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.681 [2024-06-10 14:07:08.868242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.681 [2024-06-10 14:07:08.868261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.681 qpair failed and we were unable to recover it. 
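The same portal is then added for discovery (rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420), after which the script waits on the background process started earlier (wait 1649691). A sketch of the discovery listener step under the same assumptions:

    # Make the discovery service reachable on the same address and port.
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420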
00:38:54.681 [2024-06-10 14:07:08.878006] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.681 [2024-06-10 14:07:08.878100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.681 [2024-06-10 14:07:08.878119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.681 [2024-06-10 14:07:08.878128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.681 [2024-06-10 14:07:08.878137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.681 [2024-06-10 14:07:08.878156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.681 qpair failed and we were unable to recover it. 00:38:54.681 [2024-06-10 14:07:08.888139] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.681 [2024-06-10 14:07:08.888275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.681 [2024-06-10 14:07:08.888293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.681 [2024-06-10 14:07:08.888302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.681 [2024-06-10 14:07:08.888311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.681 [2024-06-10 14:07:08.888329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.681 qpair failed and we were unable to recover it. 00:38:54.681 [2024-06-10 14:07:08.898135] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.681 [2024-06-10 14:07:08.898226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.681 [2024-06-10 14:07:08.898248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.681 [2024-06-10 14:07:08.898259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.681 [2024-06-10 14:07:08.898268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.681 [2024-06-10 14:07:08.898288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.681 qpair failed and we were unable to recover it. 
00:38:54.681 [2024-06-10 14:07:08.908141] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.681 [2024-06-10 14:07:08.908245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.681 [2024-06-10 14:07:08.908264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.681 [2024-06-10 14:07:08.908274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.681 [2024-06-10 14:07:08.908283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.681 [2024-06-10 14:07:08.908302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.681 qpair failed and we were unable to recover it. 00:38:54.681 [2024-06-10 14:07:08.918132] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.681 [2024-06-10 14:07:08.918219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.681 [2024-06-10 14:07:08.918237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.681 [2024-06-10 14:07:08.918250] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.681 [2024-06-10 14:07:08.918258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.681 [2024-06-10 14:07:08.918277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.681 qpair failed and we were unable to recover it. 00:38:54.681 [2024-06-10 14:07:08.928154] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.681 [2024-06-10 14:07:08.928250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.681 [2024-06-10 14:07:08.928268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.681 [2024-06-10 14:07:08.928277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.681 [2024-06-10 14:07:08.928286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.681 [2024-06-10 14:07:08.928304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.681 qpair failed and we were unable to recover it. 
00:38:54.681 [2024-06-10 14:07:08.938181] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.681 [2024-06-10 14:07:08.938300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.681 [2024-06-10 14:07:08.938319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.681 [2024-06-10 14:07:08.938329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.681 [2024-06-10 14:07:08.938337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.681 [2024-06-10 14:07:08.938355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.681 qpair failed and we were unable to recover it. 00:38:54.682 [2024-06-10 14:07:08.948162] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:08.948248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:08.948266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:08.948275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:08.948285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:08.948303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 00:38:54.682 [2024-06-10 14:07:08.958235] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:08.958324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:08.958343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:08.958353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:08.958362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:08.958380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 
00:38:54.682 [2024-06-10 14:07:08.968247] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:08.968353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:08.968372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:08.968381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:08.968390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:08.968408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 00:38:54.682 [2024-06-10 14:07:08.978320] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:08.978408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:08.978425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:08.978435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:08.978444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:08.978462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 00:38:54.682 [2024-06-10 14:07:08.988360] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:08.988455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:08.988472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:08.988482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:08.988491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:08.988510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 
00:38:54.682 [2024-06-10 14:07:08.998347] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:08.998437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:08.998454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:08.998463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:08.998472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:08.998490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 00:38:54.682 [2024-06-10 14:07:09.008357] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:09.008450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:09.008471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:09.008480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:09.008489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:09.008507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 00:38:54.682 [2024-06-10 14:07:09.018424] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:09.018517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:09.018535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:09.018544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:09.018553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:09.018571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 
00:38:54.682 [2024-06-10 14:07:09.028400] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:09.028489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:09.028506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:09.028516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:09.028524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:09.028542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 00:38:54.682 [2024-06-10 14:07:09.038448] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:09.038546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:09.038563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:09.038572] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:09.038586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:09.038604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 00:38:54.682 [2024-06-10 14:07:09.048535] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:09.048703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:09.048722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:09.048732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:09.048741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:09.048763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 
00:38:54.682 [2024-06-10 14:07:09.058610] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:09.058694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:09.058712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:09.058722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:09.058730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:09.058748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 00:38:54.682 [2024-06-10 14:07:09.068606] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:09.068692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.682 [2024-06-10 14:07:09.068709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.682 [2024-06-10 14:07:09.068719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.682 [2024-06-10 14:07:09.068728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.682 [2024-06-10 14:07:09.068745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.682 qpair failed and we were unable to recover it. 00:38:54.682 [2024-06-10 14:07:09.078583] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.682 [2024-06-10 14:07:09.078672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.683 [2024-06-10 14:07:09.078690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.683 [2024-06-10 14:07:09.078699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.683 [2024-06-10 14:07:09.078708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.683 [2024-06-10 14:07:09.078726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.683 qpair failed and we were unable to recover it. 
00:38:54.683 [2024-06-10 14:07:09.088864] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.683 [2024-06-10 14:07:09.088973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.683 [2024-06-10 14:07:09.088991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.683 [2024-06-10 14:07:09.089000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.683 [2024-06-10 14:07:09.089009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.683 [2024-06-10 14:07:09.089027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.683 qpair failed and we were unable to recover it. 00:38:54.683 [2024-06-10 14:07:09.098762] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.683 [2024-06-10 14:07:09.098846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.683 [2024-06-10 14:07:09.098867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.683 [2024-06-10 14:07:09.098876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.683 [2024-06-10 14:07:09.098885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.683 [2024-06-10 14:07:09.098903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.683 qpair failed and we were unable to recover it. 00:38:54.683 [2024-06-10 14:07:09.108793] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.683 [2024-06-10 14:07:09.108894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.683 [2024-06-10 14:07:09.108912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.683 [2024-06-10 14:07:09.108921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.683 [2024-06-10 14:07:09.108930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.683 [2024-06-10 14:07:09.108948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.683 qpair failed and we were unable to recover it. 
00:38:54.683 [2024-06-10 14:07:09.118798] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.683 [2024-06-10 14:07:09.118913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.683 [2024-06-10 14:07:09.118931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.683 [2024-06-10 14:07:09.118940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.683 [2024-06-10 14:07:09.118949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.683 [2024-06-10 14:07:09.118968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.683 qpair failed and we were unable to recover it. 00:38:54.683 [2024-06-10 14:07:09.128749] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.683 [2024-06-10 14:07:09.128846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.683 [2024-06-10 14:07:09.128864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.683 [2024-06-10 14:07:09.128873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.683 [2024-06-10 14:07:09.128881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.683 [2024-06-10 14:07:09.128899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.683 qpair failed and we were unable to recover it. 00:38:54.683 [2024-06-10 14:07:09.138785] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.683 [2024-06-10 14:07:09.138876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.683 [2024-06-10 14:07:09.138893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.683 [2024-06-10 14:07:09.138903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.683 [2024-06-10 14:07:09.138911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.683 [2024-06-10 14:07:09.138932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.683 qpair failed and we were unable to recover it. 
00:38:54.683 [2024-06-10 14:07:09.148837] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.683 [2024-06-10 14:07:09.148932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.683 [2024-06-10 14:07:09.148952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.683 [2024-06-10 14:07:09.148962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.683 [2024-06-10 14:07:09.148970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.683 [2024-06-10 14:07:09.148989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.683 qpair failed and we were unable to recover it. 00:38:54.940 [2024-06-10 14:07:09.158804] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.940 [2024-06-10 14:07:09.158908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.940 [2024-06-10 14:07:09.158930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.940 [2024-06-10 14:07:09.158940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.940 [2024-06-10 14:07:09.158949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.940 [2024-06-10 14:07:09.158968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.940 qpair failed and we were unable to recover it. 00:38:54.940 [2024-06-10 14:07:09.168862] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.940 [2024-06-10 14:07:09.168961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.940 [2024-06-10 14:07:09.168980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.940 [2024-06-10 14:07:09.168989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.940 [2024-06-10 14:07:09.168998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.940 [2024-06-10 14:07:09.169016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.940 qpair failed and we were unable to recover it. 
00:38:54.940 [2024-06-10 14:07:09.178917] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.940 [2024-06-10 14:07:09.179047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.940 [2024-06-10 14:07:09.179065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.940 [2024-06-10 14:07:09.179075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.940 [2024-06-10 14:07:09.179083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.940 [2024-06-10 14:07:09.179101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.940 qpair failed and we were unable to recover it. 00:38:54.941 [2024-06-10 14:07:09.188858] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.188949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.188966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.188976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.188985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.189002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 00:38:54.941 [2024-06-10 14:07:09.198945] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.199034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.199052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.199061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.199070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.199088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 
00:38:54.941 [2024-06-10 14:07:09.208888] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.208979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.208996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.209006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.209014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.209032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 00:38:54.941 [2024-06-10 14:07:09.218996] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.219081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.219099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.219108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.219117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.219135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 00:38:54.941 [2024-06-10 14:07:09.228962] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.229085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.229103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.229113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.229125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.229143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 
00:38:54.941 [2024-06-10 14:07:09.239052] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.239148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.239166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.239176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.239184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.239202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 00:38:54.941 [2024-06-10 14:07:09.249116] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.249220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.249238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.249248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.249256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.249274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 00:38:54.941 [2024-06-10 14:07:09.259138] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.259240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.259258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.259268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.259277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.259296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 
00:38:54.941 [2024-06-10 14:07:09.269143] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.269230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.269247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.269257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.269265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.269282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 00:38:54.941 [2024-06-10 14:07:09.279169] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.279259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.279277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.279286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.279295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.279312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 00:38:54.941 [2024-06-10 14:07:09.289216] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.289306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.289324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.289333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.289342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.289360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 
00:38:54.941 [2024-06-10 14:07:09.299161] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.299255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.299273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.299282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.299291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.299309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 00:38:54.941 [2024-06-10 14:07:09.309260] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.309348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.309366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.941 [2024-06-10 14:07:09.309376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.941 [2024-06-10 14:07:09.309384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.941 [2024-06-10 14:07:09.309402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.941 qpair failed and we were unable to recover it. 00:38:54.941 [2024-06-10 14:07:09.319356] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.941 [2024-06-10 14:07:09.319444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.941 [2024-06-10 14:07:09.319461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.942 [2024-06-10 14:07:09.319473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.942 [2024-06-10 14:07:09.319482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.942 [2024-06-10 14:07:09.319501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.942 qpair failed and we were unable to recover it. 
00:38:54.942 [2024-06-10 14:07:09.329231] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.942 [2024-06-10 14:07:09.329324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.942 [2024-06-10 14:07:09.329342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.942 [2024-06-10 14:07:09.329351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.942 [2024-06-10 14:07:09.329360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.942 [2024-06-10 14:07:09.329379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.942 qpair failed and we were unable to recover it. 00:38:54.942 [2024-06-10 14:07:09.339253] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.942 [2024-06-10 14:07:09.339337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.942 [2024-06-10 14:07:09.339355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.942 [2024-06-10 14:07:09.339365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.942 [2024-06-10 14:07:09.339373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.942 [2024-06-10 14:07:09.339391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.942 qpair failed and we were unable to recover it. 00:38:54.942 [2024-06-10 14:07:09.349301] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.942 [2024-06-10 14:07:09.349389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.942 [2024-06-10 14:07:09.349407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.942 [2024-06-10 14:07:09.349417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.942 [2024-06-10 14:07:09.349426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.942 [2024-06-10 14:07:09.349443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.942 qpair failed and we were unable to recover it. 
00:38:54.942 [2024-06-10 14:07:09.359347] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.942 [2024-06-10 14:07:09.359435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.942 [2024-06-10 14:07:09.359453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.942 [2024-06-10 14:07:09.359463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.942 [2024-06-10 14:07:09.359472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.942 [2024-06-10 14:07:09.359490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.942 qpair failed and we were unable to recover it. 00:38:54.942 [2024-06-10 14:07:09.369366] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.942 [2024-06-10 14:07:09.369458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.942 [2024-06-10 14:07:09.369476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.942 [2024-06-10 14:07:09.369487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.942 [2024-06-10 14:07:09.369496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.942 [2024-06-10 14:07:09.369513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.942 qpair failed and we were unable to recover it. 00:38:54.942 [2024-06-10 14:07:09.379447] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.942 [2024-06-10 14:07:09.379533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.942 [2024-06-10 14:07:09.379553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.942 [2024-06-10 14:07:09.379563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.942 [2024-06-10 14:07:09.379572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.942 [2024-06-10 14:07:09.379596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.942 qpair failed and we were unable to recover it. 
00:38:54.942 [2024-06-10 14:07:09.389440] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.942 [2024-06-10 14:07:09.389572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.942 [2024-06-10 14:07:09.389594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.942 [2024-06-10 14:07:09.389604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.942 [2024-06-10 14:07:09.389613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.942 [2024-06-10 14:07:09.389631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.942 qpair failed and we were unable to recover it. 00:38:54.942 [2024-06-10 14:07:09.399501] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.942 [2024-06-10 14:07:09.399595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.942 [2024-06-10 14:07:09.399613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.942 [2024-06-10 14:07:09.399622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.942 [2024-06-10 14:07:09.399632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.942 [2024-06-10 14:07:09.399650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.942 qpair failed and we were unable to recover it. 00:38:54.942 [2024-06-10 14:07:09.409470] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:54.942 [2024-06-10 14:07:09.409569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:54.942 [2024-06-10 14:07:09.409600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:54.942 [2024-06-10 14:07:09.409614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:54.942 [2024-06-10 14:07:09.409622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:54.942 [2024-06-10 14:07:09.409642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:54.942 qpair failed and we were unable to recover it. 
00:38:55.200 [2024-06-10 14:07:09.419558] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.200 [2024-06-10 14:07:09.419663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.200 [2024-06-10 14:07:09.419685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.200 [2024-06-10 14:07:09.419695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.200 [2024-06-10 14:07:09.419704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.200 [2024-06-10 14:07:09.419723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.200 qpair failed and we were unable to recover it. 00:38:55.200 [2024-06-10 14:07:09.429694] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.429782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.429800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.429809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.429818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.429837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 00:38:55.201 [2024-06-10 14:07:09.439595] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.439686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.439704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.439714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.439723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.439741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 
00:38:55.201 [2024-06-10 14:07:09.449594] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.449687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.449705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.449715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.449723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.449741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 00:38:55.201 [2024-06-10 14:07:09.459612] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.459698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.459716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.459726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.459734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.459753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 00:38:55.201 [2024-06-10 14:07:09.469644] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.469728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.469746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.469756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.469764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.469782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 
00:38:55.201 [2024-06-10 14:07:09.479739] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.479829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.479847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.479856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.479865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.479884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 00:38:55.201 [2024-06-10 14:07:09.489768] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.489859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.489878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.489887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.489896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.489914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 00:38:55.201 [2024-06-10 14:07:09.499797] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.499881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.499903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.499913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.499921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.499940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 
00:38:55.201 [2024-06-10 14:07:09.509775] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.509864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.509882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.509892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.509901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.509919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 00:38:55.201 [2024-06-10 14:07:09.519796] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.519899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.519917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.519926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.519935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.519953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 00:38:55.201 [2024-06-10 14:07:09.529826] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.529916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.529934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.529944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.529953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.529971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 
00:38:55.201 [2024-06-10 14:07:09.539858] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.539947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.539965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.539974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.539983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.540006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 00:38:55.201 [2024-06-10 14:07:09.549861] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.201 [2024-06-10 14:07:09.549949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.201 [2024-06-10 14:07:09.549966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.201 [2024-06-10 14:07:09.549976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.201 [2024-06-10 14:07:09.549984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.201 [2024-06-10 14:07:09.550002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.201 qpair failed and we were unable to recover it. 00:38:55.201 [2024-06-10 14:07:09.559894] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.202 [2024-06-10 14:07:09.559986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.202 [2024-06-10 14:07:09.560004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.202 [2024-06-10 14:07:09.560014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.202 [2024-06-10 14:07:09.560022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.202 [2024-06-10 14:07:09.560040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.202 qpair failed and we were unable to recover it. 
00:38:55.202 [2024-06-10 14:07:09.569919] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.202 [2024-06-10 14:07:09.570009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.202 [2024-06-10 14:07:09.570026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.202 [2024-06-10 14:07:09.570036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.202 [2024-06-10 14:07:09.570045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.202 [2024-06-10 14:07:09.570062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.202 qpair failed and we were unable to recover it. 00:38:55.202 [2024-06-10 14:07:09.580027] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.202 [2024-06-10 14:07:09.580115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.202 [2024-06-10 14:07:09.580132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.202 [2024-06-10 14:07:09.580142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.202 [2024-06-10 14:07:09.580151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.202 [2024-06-10 14:07:09.580169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.202 qpair failed and we were unable to recover it. 00:38:55.202 [2024-06-10 14:07:09.589965] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.202 [2024-06-10 14:07:09.590052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.202 [2024-06-10 14:07:09.590073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.202 [2024-06-10 14:07:09.590082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.202 [2024-06-10 14:07:09.590091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.202 [2024-06-10 14:07:09.590108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.202 qpair failed and we were unable to recover it. 
00:38:55.202 [2024-06-10 14:07:09.600074] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.202 [2024-06-10 14:07:09.600168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.202 [2024-06-10 14:07:09.600185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.202 [2024-06-10 14:07:09.600195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.202 [2024-06-10 14:07:09.600204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.202 [2024-06-10 14:07:09.600222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.202 qpair failed and we were unable to recover it. 00:38:55.202 [2024-06-10 14:07:09.610148] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.202 [2024-06-10 14:07:09.610248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.202 [2024-06-10 14:07:09.610265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.202 [2024-06-10 14:07:09.610275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.202 [2024-06-10 14:07:09.610284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.202 [2024-06-10 14:07:09.610302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.202 qpair failed and we were unable to recover it. 00:38:55.202 [2024-06-10 14:07:09.620059] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.202 [2024-06-10 14:07:09.620152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.202 [2024-06-10 14:07:09.620170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.202 [2024-06-10 14:07:09.620180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.202 [2024-06-10 14:07:09.620188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.202 [2024-06-10 14:07:09.620206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.202 qpair failed and we were unable to recover it. 
00:38:55.202 [2024-06-10 14:07:09.630164] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.202 [2024-06-10 14:07:09.630253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.202 [2024-06-10 14:07:09.630270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.202 [2024-06-10 14:07:09.630280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.202 [2024-06-10 14:07:09.630292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.202 [2024-06-10 14:07:09.630309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.202 qpair failed and we were unable to recover it. 00:38:55.202 [2024-06-10 14:07:09.640193] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.202 [2024-06-10 14:07:09.640279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.202 [2024-06-10 14:07:09.640296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.202 [2024-06-10 14:07:09.640306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.202 [2024-06-10 14:07:09.640315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.202 [2024-06-10 14:07:09.640333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.202 qpair failed and we were unable to recover it. 00:38:55.202 [2024-06-10 14:07:09.650145] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.202 [2024-06-10 14:07:09.650233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.202 [2024-06-10 14:07:09.650251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.202 [2024-06-10 14:07:09.650261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.202 [2024-06-10 14:07:09.650270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.202 [2024-06-10 14:07:09.650288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.202 qpair failed and we were unable to recover it. 
00:38:55.202 [2024-06-10 14:07:09.660258] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.202 [2024-06-10 14:07:09.660344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.202 [2024-06-10 14:07:09.660361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.202 [2024-06-10 14:07:09.660371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.202 [2024-06-10 14:07:09.660380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.202 [2024-06-10 14:07:09.660398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.202 qpair failed and we were unable to recover it. 00:38:55.202 [2024-06-10 14:07:09.670262] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.202 [2024-06-10 14:07:09.670360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.202 [2024-06-10 14:07:09.670381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.202 [2024-06-10 14:07:09.670392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.202 [2024-06-10 14:07:09.670400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.202 [2024-06-10 14:07:09.670420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.202 qpair failed and we were unable to recover it. 00:38:55.460 [2024-06-10 14:07:09.680311] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.460 [2024-06-10 14:07:09.680414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.460 [2024-06-10 14:07:09.680436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.460 [2024-06-10 14:07:09.680447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.460 [2024-06-10 14:07:09.680457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.460 [2024-06-10 14:07:09.680477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.460 qpair failed and we were unable to recover it. 
00:38:55.460 [2024-06-10 14:07:09.690351] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.460 [2024-06-10 14:07:09.690447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.460 [2024-06-10 14:07:09.690466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.460 [2024-06-10 14:07:09.690476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.460 [2024-06-10 14:07:09.690485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.460 [2024-06-10 14:07:09.690504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 00:38:55.461 [2024-06-10 14:07:09.700369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.700462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.700481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.700491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.700499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.700517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 00:38:55.461 [2024-06-10 14:07:09.710411] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.710512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.710530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.710539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.710548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.710566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 
00:38:55.461 [2024-06-10 14:07:09.720353] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.720447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.720464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.720477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.720486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.720505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 00:38:55.461 [2024-06-10 14:07:09.730443] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.730536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.730554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.730563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.730572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.730597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 00:38:55.461 [2024-06-10 14:07:09.740519] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.740613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.740630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.740640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.740648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.740666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 
00:38:55.461 [2024-06-10 14:07:09.750438] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.750528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.750545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.750555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.750564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.750586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 00:38:55.461 [2024-06-10 14:07:09.760521] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.760618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.760636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.760646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.760655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.760673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 00:38:55.461 [2024-06-10 14:07:09.770539] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.770630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.770648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.770658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.770667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.770685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 
00:38:55.461 [2024-06-10 14:07:09.780518] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.780615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.780632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.780642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.780651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.780669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 00:38:55.461 [2024-06-10 14:07:09.790542] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.790635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.790652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.790662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.790671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.790689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 00:38:55.461 [2024-06-10 14:07:09.800592] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.800684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.800701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.800711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.800720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.800738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 
00:38:55.461 [2024-06-10 14:07:09.810666] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.810758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.810775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.810788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.810797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.810815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 00:38:55.461 [2024-06-10 14:07:09.820718] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.820806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.820824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.820834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.820843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.820861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 00:38:55.461 [2024-06-10 14:07:09.830677] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.830764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.830782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.830792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.830802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.830820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 
00:38:55.461 [2024-06-10 14:07:09.840774] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.840858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.840876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.840886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.840894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.840912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 00:38:55.461 [2024-06-10 14:07:09.850762] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.461 [2024-06-10 14:07:09.850851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.461 [2024-06-10 14:07:09.850870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.461 [2024-06-10 14:07:09.850881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.461 [2024-06-10 14:07:09.850890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.461 [2024-06-10 14:07:09.850908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.461 qpair failed and we were unable to recover it. 00:38:55.461 [2024-06-10 14:07:09.860839] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.462 [2024-06-10 14:07:09.860927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.462 [2024-06-10 14:07:09.860945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.462 [2024-06-10 14:07:09.860955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.462 [2024-06-10 14:07:09.860964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.462 [2024-06-10 14:07:09.860982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.462 qpair failed and we were unable to recover it. 
00:38:55.462 [2024-06-10 14:07:09.870883] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.462 [2024-06-10 14:07:09.870967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.462 [2024-06-10 14:07:09.870985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.462 [2024-06-10 14:07:09.870995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.462 [2024-06-10 14:07:09.871004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.462 [2024-06-10 14:07:09.871022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.462 qpair failed and we were unable to recover it. 00:38:55.462 [2024-06-10 14:07:09.880952] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.462 [2024-06-10 14:07:09.881039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.462 [2024-06-10 14:07:09.881056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.462 [2024-06-10 14:07:09.881065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.462 [2024-06-10 14:07:09.881074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.462 [2024-06-10 14:07:09.881092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.462 qpair failed and we were unable to recover it. 00:38:55.462 [2024-06-10 14:07:09.890840] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.462 [2024-06-10 14:07:09.890928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.462 [2024-06-10 14:07:09.890945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.462 [2024-06-10 14:07:09.890955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.462 [2024-06-10 14:07:09.890963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.462 [2024-06-10 14:07:09.890981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.462 qpair failed and we were unable to recover it. 
00:38:55.462 [2024-06-10 14:07:09.900961] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.462 [2024-06-10 14:07:09.901052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.462 [2024-06-10 14:07:09.901073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.462 [2024-06-10 14:07:09.901083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.462 [2024-06-10 14:07:09.901091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.462 [2024-06-10 14:07:09.901110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.462 qpair failed and we were unable to recover it. 00:38:55.462 [2024-06-10 14:07:09.911065] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.462 [2024-06-10 14:07:09.911224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.462 [2024-06-10 14:07:09.911242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.462 [2024-06-10 14:07:09.911252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.462 [2024-06-10 14:07:09.911261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.462 [2024-06-10 14:07:09.911279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.462 qpair failed and we were unable to recover it. 00:38:55.462 [2024-06-10 14:07:09.921005] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.462 [2024-06-10 14:07:09.921093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.462 [2024-06-10 14:07:09.921111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.462 [2024-06-10 14:07:09.921121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.462 [2024-06-10 14:07:09.921129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.462 [2024-06-10 14:07:09.921147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.462 qpair failed and we were unable to recover it. 
00:38:55.462 [2024-06-10 14:07:09.931031] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.720 [2024-06-10 14:07:09.931142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.720 [2024-06-10 14:07:09.931164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.720 [2024-06-10 14:07:09.931174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.720 [2024-06-10 14:07:09.931183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.721 [2024-06-10 14:07:09.931204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.721 qpair failed and we were unable to recover it. 00:38:55.721 [2024-06-10 14:07:09.941074] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.721 [2024-06-10 14:07:09.941173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.721 [2024-06-10 14:07:09.941194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.721 [2024-06-10 14:07:09.941204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.721 [2024-06-10 14:07:09.941213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.721 [2024-06-10 14:07:09.941236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.721 qpair failed and we were unable to recover it. 00:38:55.721 [2024-06-10 14:07:09.951095] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.721 [2024-06-10 14:07:09.951183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.721 [2024-06-10 14:07:09.951201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.721 [2024-06-10 14:07:09.951212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.721 [2024-06-10 14:07:09.951221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.721 [2024-06-10 14:07:09.951240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.721 qpair failed and we were unable to recover it. 
00:38:55.721 [2024-06-10 14:07:09.961061] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.721 [2024-06-10 14:07:09.961148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.721 [2024-06-10 14:07:09.961166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.721 [2024-06-10 14:07:09.961176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.721 [2024-06-10 14:07:09.961185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.721 [2024-06-10 14:07:09.961203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.721 qpair failed and we were unable to recover it. 00:38:55.721 [2024-06-10 14:07:09.971080] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.721 [2024-06-10 14:07:09.971172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.721 [2024-06-10 14:07:09.971190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.721 [2024-06-10 14:07:09.971200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.721 [2024-06-10 14:07:09.971209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.721 [2024-06-10 14:07:09.971227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.721 qpair failed and we were unable to recover it. 00:38:55.721 [2024-06-10 14:07:09.981186] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.721 [2024-06-10 14:07:09.981273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.721 [2024-06-10 14:07:09.981290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.721 [2024-06-10 14:07:09.981300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.721 [2024-06-10 14:07:09.981309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.721 [2024-06-10 14:07:09.981327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.721 qpair failed and we were unable to recover it. 
00:38:55.721 [2024-06-10 14:07:09.991220] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.721 [2024-06-10 14:07:09.991306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.721 [2024-06-10 14:07:09.991326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.721 [2024-06-10 14:07:09.991336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.721 [2024-06-10 14:07:09.991344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.721 [2024-06-10 14:07:09.991362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.721 qpair failed and we were unable to recover it. 00:38:55.721 [2024-06-10 14:07:10.001202] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.721 [2024-06-10 14:07:10.001295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.721 [2024-06-10 14:07:10.001314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.721 [2024-06-10 14:07:10.001324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.721 [2024-06-10 14:07:10.001332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.721 [2024-06-10 14:07:10.001351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.721 qpair failed and we were unable to recover it. 00:38:55.721 [2024-06-10 14:07:10.011243] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.721 [2024-06-10 14:07:10.011361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.721 [2024-06-10 14:07:10.011378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.721 [2024-06-10 14:07:10.011389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.721 [2024-06-10 14:07:10.011397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.721 [2024-06-10 14:07:10.011415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.721 qpair failed and we were unable to recover it. 
00:38:55.721 [2024-06-10 14:07:10.021281] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.721 [2024-06-10 14:07:10.021374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.721 [2024-06-10 14:07:10.021395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.721 [2024-06-10 14:07:10.021408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.721 [2024-06-10 14:07:10.021420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.721 [2024-06-10 14:07:10.021443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.721 qpair failed and we were unable to recover it. 00:38:55.721 [2024-06-10 14:07:10.031399] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.721 [2024-06-10 14:07:10.031501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.721 [2024-06-10 14:07:10.031522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.721 [2024-06-10 14:07:10.031533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.721 [2024-06-10 14:07:10.031545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.721 [2024-06-10 14:07:10.031567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.721 qpair failed and we were unable to recover it. 00:38:55.721 [2024-06-10 14:07:10.041358] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.721 [2024-06-10 14:07:10.041494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.721 [2024-06-10 14:07:10.041513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.721 [2024-06-10 14:07:10.041523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.721 [2024-06-10 14:07:10.041532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.722 [2024-06-10 14:07:10.041551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.722 qpair failed and we were unable to recover it. 
00:38:55.722 [2024-06-10 14:07:10.051333] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.722 [2024-06-10 14:07:10.051424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.722 [2024-06-10 14:07:10.051443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.722 [2024-06-10 14:07:10.051453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.722 [2024-06-10 14:07:10.051463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.722 [2024-06-10 14:07:10.051482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.722 qpair failed and we were unable to recover it. 00:38:55.722 [2024-06-10 14:07:10.061427] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.722 [2024-06-10 14:07:10.061512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.722 [2024-06-10 14:07:10.061530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.722 [2024-06-10 14:07:10.061540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.722 [2024-06-10 14:07:10.061549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.722 [2024-06-10 14:07:10.061568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.722 qpair failed and we were unable to recover it. 00:38:55.722 [2024-06-10 14:07:10.071379] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.722 [2024-06-10 14:07:10.071477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.722 [2024-06-10 14:07:10.071498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.722 [2024-06-10 14:07:10.071511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.722 [2024-06-10 14:07:10.071521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.722 [2024-06-10 14:07:10.071544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.722 qpair failed and we were unable to recover it. 
00:38:55.722 [2024-06-10 14:07:10.081502] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.722 [2024-06-10 14:07:10.081611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.722 [2024-06-10 14:07:10.081630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.722 [2024-06-10 14:07:10.081640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.722 [2024-06-10 14:07:10.081648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.722 [2024-06-10 14:07:10.081667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.722 qpair failed and we were unable to recover it. 00:38:55.722 [2024-06-10 14:07:10.091501] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.722 [2024-06-10 14:07:10.091595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.722 [2024-06-10 14:07:10.091614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.722 [2024-06-10 14:07:10.091624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.722 [2024-06-10 14:07:10.091632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.722 [2024-06-10 14:07:10.091651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.722 qpair failed and we were unable to recover it. 00:38:55.722 [2024-06-10 14:07:10.101505] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.722 [2024-06-10 14:07:10.101606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.722 [2024-06-10 14:07:10.101625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.722 [2024-06-10 14:07:10.101635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.722 [2024-06-10 14:07:10.101644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.722 [2024-06-10 14:07:10.101662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.722 qpair failed and we were unable to recover it. 
00:38:55.722 [2024-06-10 14:07:10.111535] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.722 [2024-06-10 14:07:10.111628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.722 [2024-06-10 14:07:10.111648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.722 [2024-06-10 14:07:10.111658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.722 [2024-06-10 14:07:10.111667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.722 [2024-06-10 14:07:10.111686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.722 qpair failed and we were unable to recover it. 00:38:55.722 [2024-06-10 14:07:10.121928] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.722 [2024-06-10 14:07:10.122026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.722 [2024-06-10 14:07:10.122047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.722 [2024-06-10 14:07:10.122058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.722 [2024-06-10 14:07:10.122071] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.722 [2024-06-10 14:07:10.122092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.722 qpair failed and we were unable to recover it. 00:38:55.722 [2024-06-10 14:07:10.131652] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.722 [2024-06-10 14:07:10.131743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.722 [2024-06-10 14:07:10.131761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.722 [2024-06-10 14:07:10.131772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.722 [2024-06-10 14:07:10.131781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.722 [2024-06-10 14:07:10.131799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.722 qpair failed and we were unable to recover it. 
00:38:55.722 [2024-06-10 14:07:10.141687] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.722 [2024-06-10 14:07:10.141776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.722 [2024-06-10 14:07:10.141794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.722 [2024-06-10 14:07:10.141804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.722 [2024-06-10 14:07:10.141812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.722 [2024-06-10 14:07:10.141831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.722 qpair failed and we were unable to recover it. 00:38:55.722 [2024-06-10 14:07:10.151641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.722 [2024-06-10 14:07:10.151729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.722 [2024-06-10 14:07:10.151747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.723 [2024-06-10 14:07:10.151757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.723 [2024-06-10 14:07:10.151766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.723 [2024-06-10 14:07:10.151785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.723 qpair failed and we were unable to recover it. 00:38:55.723 [2024-06-10 14:07:10.161707] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.723 [2024-06-10 14:07:10.161800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.723 [2024-06-10 14:07:10.161819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.723 [2024-06-10 14:07:10.161830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.723 [2024-06-10 14:07:10.161839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.723 [2024-06-10 14:07:10.161858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.723 qpair failed and we were unable to recover it. 
00:38:55.723 [2024-06-10 14:07:10.171761] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.723 [2024-06-10 14:07:10.171870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.723 [2024-06-10 14:07:10.171888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.723 [2024-06-10 14:07:10.171898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.723 [2024-06-10 14:07:10.171907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.723 [2024-06-10 14:07:10.171926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.723 qpair failed and we were unable to recover it. 00:38:55.723 [2024-06-10 14:07:10.181814] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.723 [2024-06-10 14:07:10.181906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.723 [2024-06-10 14:07:10.181924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.723 [2024-06-10 14:07:10.181934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.723 [2024-06-10 14:07:10.181942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.723 [2024-06-10 14:07:10.181960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.723 qpair failed and we were unable to recover it. 00:38:55.981 [2024-06-10 14:07:10.191833] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.981 [2024-06-10 14:07:10.191943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.981 [2024-06-10 14:07:10.191964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.981 [2024-06-10 14:07:10.191974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.981 [2024-06-10 14:07:10.191983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.981 [2024-06-10 14:07:10.192002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.981 qpair failed and we were unable to recover it. 
00:38:55.981 [2024-06-10 14:07:10.201846] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.981 [2024-06-10 14:07:10.201948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.981 [2024-06-10 14:07:10.201970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.981 [2024-06-10 14:07:10.201981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.981 [2024-06-10 14:07:10.201991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.981 [2024-06-10 14:07:10.202010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.981 qpair failed and we were unable to recover it. 00:38:55.981 [2024-06-10 14:07:10.211910] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.981 [2024-06-10 14:07:10.212039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.981 [2024-06-10 14:07:10.212057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.981 [2024-06-10 14:07:10.212070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.981 [2024-06-10 14:07:10.212078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.981 [2024-06-10 14:07:10.212098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.981 qpair failed and we were unable to recover it. 00:38:55.981 [2024-06-10 14:07:10.221910] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.981 [2024-06-10 14:07:10.222001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.981 [2024-06-10 14:07:10.222019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.981 [2024-06-10 14:07:10.222029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.981 [2024-06-10 14:07:10.222037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.981 [2024-06-10 14:07:10.222056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.981 qpair failed and we were unable to recover it. 
00:38:55.981 [2024-06-10 14:07:10.231937] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.981 [2024-06-10 14:07:10.232021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.981 [2024-06-10 14:07:10.232039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.981 [2024-06-10 14:07:10.232050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.981 [2024-06-10 14:07:10.232059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.981 [2024-06-10 14:07:10.232077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.981 qpair failed and we were unable to recover it. 00:38:55.981 [2024-06-10 14:07:10.241953] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.981 [2024-06-10 14:07:10.242041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.981 [2024-06-10 14:07:10.242059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.981 [2024-06-10 14:07:10.242068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.981 [2024-06-10 14:07:10.242077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.981 [2024-06-10 14:07:10.242095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.981 qpair failed and we were unable to recover it. 00:38:55.981 [2024-06-10 14:07:10.251977] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.981 [2024-06-10 14:07:10.252086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.981 [2024-06-10 14:07:10.252103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.981 [2024-06-10 14:07:10.252113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.981 [2024-06-10 14:07:10.252122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.981 [2024-06-10 14:07:10.252140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.981 qpair failed and we were unable to recover it. 
00:38:55.981 [2024-06-10 14:07:10.262051] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.981 [2024-06-10 14:07:10.262138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.981 [2024-06-10 14:07:10.262156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.981 [2024-06-10 14:07:10.262166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.981 [2024-06-10 14:07:10.262174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.981 [2024-06-10 14:07:10.262192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.981 qpair failed and we were unable to recover it. 00:38:55.981 [2024-06-10 14:07:10.271974] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.981 [2024-06-10 14:07:10.272062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.981 [2024-06-10 14:07:10.272080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.981 [2024-06-10 14:07:10.272089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.981 [2024-06-10 14:07:10.272098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.272116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 00:38:55.982 [2024-06-10 14:07:10.282062] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.282152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.282169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.282179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.282187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.282205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 
00:38:55.982 [2024-06-10 14:07:10.292143] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.292244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.292261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.292271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.292280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.292298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 00:38:55.982 [2024-06-10 14:07:10.302053] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.302139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.302162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.302172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.302181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.302198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 00:38:55.982 [2024-06-10 14:07:10.312157] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.312239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.312257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.312267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.312275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.312294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 
00:38:55.982 [2024-06-10 14:07:10.322196] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.322308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.322326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.322335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.322344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.322362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 00:38:55.982 [2024-06-10 14:07:10.332174] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.332265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.332282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.332292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.332300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.332319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 00:38:55.982 [2024-06-10 14:07:10.342241] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.342332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.342349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.342359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.342368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.342389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 
00:38:55.982 [2024-06-10 14:07:10.352281] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.352365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.352382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.352392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.352400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.352418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 00:38:55.982 [2024-06-10 14:07:10.362290] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.362380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.362397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.362407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.362415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.362433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 00:38:55.982 [2024-06-10 14:07:10.372347] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.372430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.372448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.372458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.372467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.372484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 
00:38:55.982 [2024-06-10 14:07:10.382270] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.382378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.382395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.382405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.382414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.382432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 00:38:55.982 [2024-06-10 14:07:10.392390] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.392474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.392494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.392503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.392512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.392530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 00:38:55.982 [2024-06-10 14:07:10.402382] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.402470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.402488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.402498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.402507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.402524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 
00:38:55.982 [2024-06-10 14:07:10.412493] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.412622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.412640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.412650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.412659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.412677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 00:38:55.982 [2024-06-10 14:07:10.422456] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.422542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.422559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.422569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.422583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.422602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 00:38:55.982 [2024-06-10 14:07:10.432507] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.432602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.432620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.432630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.432641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.432659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 
00:38:55.982 [2024-06-10 14:07:10.442532] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.982 [2024-06-10 14:07:10.442626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.982 [2024-06-10 14:07:10.442643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.982 [2024-06-10 14:07:10.442653] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.982 [2024-06-10 14:07:10.442662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:55.982 [2024-06-10 14:07:10.442680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:55.982 qpair failed and we were unable to recover it. 00:38:56.241 [2024-06-10 14:07:10.452601] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.241 [2024-06-10 14:07:10.452701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.241 [2024-06-10 14:07:10.452722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.241 [2024-06-10 14:07:10.452732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.241 [2024-06-10 14:07:10.452741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.241 [2024-06-10 14:07:10.452761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.241 qpair failed and we were unable to recover it. 00:38:56.241 [2024-06-10 14:07:10.462620] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.242 [2024-06-10 14:07:10.462717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.242 [2024-06-10 14:07:10.462738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.242 [2024-06-10 14:07:10.462748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.242 [2024-06-10 14:07:10.462757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.242 [2024-06-10 14:07:10.462776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.242 qpair failed and we were unable to recover it. 
00:38:56.242 [2024-06-10 14:07:10.472557] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.242 [2024-06-10 14:07:10.472652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.242 [2024-06-10 14:07:10.472671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.242 [2024-06-10 14:07:10.472681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.242 [2024-06-10 14:07:10.472690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.242 [2024-06-10 14:07:10.472710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.242 qpair failed and we were unable to recover it. 00:38:56.242 [2024-06-10 14:07:10.482656] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.242 [2024-06-10 14:07:10.482751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.242 [2024-06-10 14:07:10.482769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.242 [2024-06-10 14:07:10.482779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.242 [2024-06-10 14:07:10.482787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.242 [2024-06-10 14:07:10.482806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.242 qpair failed and we were unable to recover it. 00:38:56.242 [2024-06-10 14:07:10.492683] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.242 [2024-06-10 14:07:10.492773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.242 [2024-06-10 14:07:10.492792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.242 [2024-06-10 14:07:10.492801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.242 [2024-06-10 14:07:10.492810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.242 [2024-06-10 14:07:10.492828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.242 qpair failed and we were unable to recover it. 
00:38:56.242 [2024-06-10 14:07:10.502708] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.242 [2024-06-10 14:07:10.502798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.242 [2024-06-10 14:07:10.502816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.242 [2024-06-10 14:07:10.502825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.242 [2024-06-10 14:07:10.502834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.242 [2024-06-10 14:07:10.502852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.242 qpair failed and we were unable to recover it. 00:38:56.242 [2024-06-10 14:07:10.512756] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.242 [2024-06-10 14:07:10.512836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.242 [2024-06-10 14:07:10.512853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.242 [2024-06-10 14:07:10.512863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.242 [2024-06-10 14:07:10.512871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.242 [2024-06-10 14:07:10.512889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.242 qpair failed and we were unable to recover it. 00:38:56.242 [2024-06-10 14:07:10.522769] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.242 [2024-06-10 14:07:10.522858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.242 [2024-06-10 14:07:10.522875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.242 [2024-06-10 14:07:10.522884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.242 [2024-06-10 14:07:10.522896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.242 [2024-06-10 14:07:10.522914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.242 qpair failed and we were unable to recover it. 
00:38:56.242 [2024-06-10 14:07:10.532828] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.242 [2024-06-10 14:07:10.532915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.242 [2024-06-10 14:07:10.532933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.242 [2024-06-10 14:07:10.532942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.242 [2024-06-10 14:07:10.532951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.242 [2024-06-10 14:07:10.532969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.242 qpair failed and we were unable to recover it. 00:38:56.242 [2024-06-10 14:07:10.542837] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.242 [2024-06-10 14:07:10.542920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.242 [2024-06-10 14:07:10.542937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.242 [2024-06-10 14:07:10.542947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.242 [2024-06-10 14:07:10.542956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.242 [2024-06-10 14:07:10.542974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.242 qpair failed and we were unable to recover it. 00:38:56.242 [2024-06-10 14:07:10.552861] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.242 [2024-06-10 14:07:10.552945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.242 [2024-06-10 14:07:10.552963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.242 [2024-06-10 14:07:10.552973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.242 [2024-06-10 14:07:10.552982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.242 [2024-06-10 14:07:10.553001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.242 qpair failed and we were unable to recover it. 
00:38:56.242 [2024-06-10 14:07:10.562887] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.242 [2024-06-10 14:07:10.562979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.242 [2024-06-10 14:07:10.562999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.242 [2024-06-10 14:07:10.563008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.242 [2024-06-10 14:07:10.563017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.242 [2024-06-10 14:07:10.563035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.242 qpair failed and we were unable to recover it. 00:38:56.242 [2024-06-10 14:07:10.572887] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.242 [2024-06-10 14:07:10.572981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.242 [2024-06-10 14:07:10.573000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.242 [2024-06-10 14:07:10.573010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.242 [2024-06-10 14:07:10.573019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.242 [2024-06-10 14:07:10.573037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.243 qpair failed and we were unable to recover it. 00:38:56.243 [2024-06-10 14:07:10.583005] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.243 [2024-06-10 14:07:10.583089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.243 [2024-06-10 14:07:10.583106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.243 [2024-06-10 14:07:10.583116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.243 [2024-06-10 14:07:10.583125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.243 [2024-06-10 14:07:10.583143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.243 qpair failed and we were unable to recover it. 
00:38:56.243 [2024-06-10 14:07:10.592957] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.243 [2024-06-10 14:07:10.593049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.243 [2024-06-10 14:07:10.593067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.243 [2024-06-10 14:07:10.593077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.243 [2024-06-10 14:07:10.593085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.243 [2024-06-10 14:07:10.593104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.243 qpair failed and we were unable to recover it. 00:38:56.243 [2024-06-10 14:07:10.602998] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.243 [2024-06-10 14:07:10.603112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.243 [2024-06-10 14:07:10.603128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.243 [2024-06-10 14:07:10.603138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.243 [2024-06-10 14:07:10.603147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.243 [2024-06-10 14:07:10.603166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.243 qpair failed and we were unable to recover it. 00:38:56.243 [2024-06-10 14:07:10.613030] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.243 [2024-06-10 14:07:10.613113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.243 [2024-06-10 14:07:10.613130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.243 [2024-06-10 14:07:10.613143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.243 [2024-06-10 14:07:10.613152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.243 [2024-06-10 14:07:10.613171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.243 qpair failed and we were unable to recover it. 
00:38:56.243 [2024-06-10 14:07:10.623088] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.243 [2024-06-10 14:07:10.623187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.243 [2024-06-10 14:07:10.623203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.243 [2024-06-10 14:07:10.623213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.243 [2024-06-10 14:07:10.623222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.243 [2024-06-10 14:07:10.623240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.243 qpair failed and we were unable to recover it. 00:38:56.243 [2024-06-10 14:07:10.633071] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.243 [2024-06-10 14:07:10.633158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.243 [2024-06-10 14:07:10.633175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.243 [2024-06-10 14:07:10.633185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.243 [2024-06-10 14:07:10.633194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.243 [2024-06-10 14:07:10.633211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.243 qpair failed and we were unable to recover it. 00:38:56.243 [2024-06-10 14:07:10.643126] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.243 [2024-06-10 14:07:10.643213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.243 [2024-06-10 14:07:10.643230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.243 [2024-06-10 14:07:10.643239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.243 [2024-06-10 14:07:10.643248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.243 [2024-06-10 14:07:10.643265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.243 qpair failed and we were unable to recover it. 
00:38:56.243 [2024-06-10 14:07:10.653107] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.243 [2024-06-10 14:07:10.653201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.243 [2024-06-10 14:07:10.653219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.243 [2024-06-10 14:07:10.653229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.243 [2024-06-10 14:07:10.653238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.243 [2024-06-10 14:07:10.653256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.243 qpair failed and we were unable to recover it. 00:38:56.243 [2024-06-10 14:07:10.663216] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.243 [2024-06-10 14:07:10.663326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.243 [2024-06-10 14:07:10.663345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.243 [2024-06-10 14:07:10.663355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.243 [2024-06-10 14:07:10.663364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.243 [2024-06-10 14:07:10.663382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.243 qpair failed and we were unable to recover it. 00:38:56.243 [2024-06-10 14:07:10.673215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.243 [2024-06-10 14:07:10.673301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.243 [2024-06-10 14:07:10.673318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.243 [2024-06-10 14:07:10.673328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.243 [2024-06-10 14:07:10.673337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.243 [2024-06-10 14:07:10.673355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.243 qpair failed and we were unable to recover it. 
00:38:56.243 [2024-06-10 14:07:10.683289] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.243 [2024-06-10 14:07:10.683401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.243 [2024-06-10 14:07:10.683419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.243 [2024-06-10 14:07:10.683429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.243 [2024-06-10 14:07:10.683438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.243 [2024-06-10 14:07:10.683456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.243 qpair failed and we were unable to recover it. 00:38:56.243 [2024-06-10 14:07:10.693278] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.243 [2024-06-10 14:07:10.693366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.243 [2024-06-10 14:07:10.693384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.243 [2024-06-10 14:07:10.693393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.244 [2024-06-10 14:07:10.693402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.244 [2024-06-10 14:07:10.693421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.244 qpair failed and we were unable to recover it. 00:38:56.244 [2024-06-10 14:07:10.703306] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.244 [2024-06-10 14:07:10.703387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.244 [2024-06-10 14:07:10.703408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.244 [2024-06-10 14:07:10.703418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.244 [2024-06-10 14:07:10.703426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.244 [2024-06-10 14:07:10.703445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.244 qpair failed and we were unable to recover it. 
00:38:56.502 [2024-06-10 14:07:10.713265] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.502 [2024-06-10 14:07:10.713361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.503 [2024-06-10 14:07:10.713382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.503 [2024-06-10 14:07:10.713392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.503 [2024-06-10 14:07:10.713401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.503 [2024-06-10 14:07:10.713420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.503 qpair failed and we were unable to recover it. 00:38:56.503 [2024-06-10 14:07:10.723365] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.503 [2024-06-10 14:07:10.723464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.503 [2024-06-10 14:07:10.723485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.503 [2024-06-10 14:07:10.723495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.503 [2024-06-10 14:07:10.723504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.503 [2024-06-10 14:07:10.723523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.503 qpair failed and we were unable to recover it. 00:38:56.503 [2024-06-10 14:07:10.733392] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.503 [2024-06-10 14:07:10.733483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.503 [2024-06-10 14:07:10.733501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.503 [2024-06-10 14:07:10.733511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.503 [2024-06-10 14:07:10.733519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.503 [2024-06-10 14:07:10.733537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.503 qpair failed and we were unable to recover it. 
00:38:56.503 [2024-06-10 14:07:10.743420] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.503 [2024-06-10 14:07:10.743507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.503 [2024-06-10 14:07:10.743525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.503 [2024-06-10 14:07:10.743534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.503 [2024-06-10 14:07:10.743543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.503 [2024-06-10 14:07:10.743564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.503 qpair failed and we were unable to recover it. 00:38:56.503 [2024-06-10 14:07:10.753454] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.503 [2024-06-10 14:07:10.753543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.503 [2024-06-10 14:07:10.753561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.503 [2024-06-10 14:07:10.753570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.503 [2024-06-10 14:07:10.753584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.503 [2024-06-10 14:07:10.753602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.503 qpair failed and we were unable to recover it. 00:38:56.503 [2024-06-10 14:07:10.763483] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.503 [2024-06-10 14:07:10.763571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.503 [2024-06-10 14:07:10.763593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.503 [2024-06-10 14:07:10.763603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.503 [2024-06-10 14:07:10.763612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.503 [2024-06-10 14:07:10.763631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.503 qpair failed and we were unable to recover it. 
00:38:56.503 [2024-06-10 14:07:10.773436] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.503 [2024-06-10 14:07:10.773534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.503 [2024-06-10 14:07:10.773551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.503 [2024-06-10 14:07:10.773561] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.503 [2024-06-10 14:07:10.773570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.503 [2024-06-10 14:07:10.773594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.503 qpair failed and we were unable to recover it. 00:38:56.503 [2024-06-10 14:07:10.783480] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.503 [2024-06-10 14:07:10.783615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.503 [2024-06-10 14:07:10.783633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.503 [2024-06-10 14:07:10.783643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.503 [2024-06-10 14:07:10.783652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.503 [2024-06-10 14:07:10.783671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.503 qpair failed and we were unable to recover it. 00:38:56.503 [2024-06-10 14:07:10.793633] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.503 [2024-06-10 14:07:10.793732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.503 [2024-06-10 14:07:10.793753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.503 [2024-06-10 14:07:10.793764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.503 [2024-06-10 14:07:10.793772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.503 [2024-06-10 14:07:10.793790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.503 qpair failed and we were unable to recover it. 
00:38:56.503 [2024-06-10 14:07:10.803606] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.503 [2024-06-10 14:07:10.803694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.503 [2024-06-10 14:07:10.803711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.503 [2024-06-10 14:07:10.803720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.503 [2024-06-10 14:07:10.803729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.503 [2024-06-10 14:07:10.803747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.503 qpair failed and we were unable to recover it. 00:38:56.503 [2024-06-10 14:07:10.813641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.503 [2024-06-10 14:07:10.813732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.503 [2024-06-10 14:07:10.813750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.503 [2024-06-10 14:07:10.813760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.503 [2024-06-10 14:07:10.813768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.503 [2024-06-10 14:07:10.813787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.503 qpair failed and we were unable to recover it. 00:38:56.503 [2024-06-10 14:07:10.823685] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.503 [2024-06-10 14:07:10.823770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.504 [2024-06-10 14:07:10.823788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.504 [2024-06-10 14:07:10.823797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.504 [2024-06-10 14:07:10.823806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.504 [2024-06-10 14:07:10.823824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.504 qpair failed and we were unable to recover it. 
00:38:56.504 [2024-06-10 14:07:10.833715] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.504 [2024-06-10 14:07:10.833800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.504 [2024-06-10 14:07:10.833817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.504 [2024-06-10 14:07:10.833826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.504 [2024-06-10 14:07:10.833835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.504 [2024-06-10 14:07:10.833856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.504 qpair failed and we were unable to recover it. 00:38:56.504 [2024-06-10 14:07:10.843676] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.504 [2024-06-10 14:07:10.843760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.504 [2024-06-10 14:07:10.843777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.504 [2024-06-10 14:07:10.843787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.504 [2024-06-10 14:07:10.843795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.504 [2024-06-10 14:07:10.843814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.504 qpair failed and we were unable to recover it. 00:38:56.504 [2024-06-10 14:07:10.853755] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.504 [2024-06-10 14:07:10.853843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.504 [2024-06-10 14:07:10.853862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.504 [2024-06-10 14:07:10.853872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.504 [2024-06-10 14:07:10.853880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.504 [2024-06-10 14:07:10.853899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.504 qpair failed and we were unable to recover it. 
00:38:56.504 [2024-06-10 14:07:10.863766] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.504 [2024-06-10 14:07:10.863850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.504 [2024-06-10 14:07:10.863868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.504 [2024-06-10 14:07:10.863878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.504 [2024-06-10 14:07:10.863887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.504 [2024-06-10 14:07:10.863905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.504 qpair failed and we were unable to recover it. 00:38:56.504 [2024-06-10 14:07:10.873835] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.504 [2024-06-10 14:07:10.873924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.504 [2024-06-10 14:07:10.873943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.504 [2024-06-10 14:07:10.873953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.504 [2024-06-10 14:07:10.873962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.504 [2024-06-10 14:07:10.873981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.504 qpair failed and we were unable to recover it. 00:38:56.504 [2024-06-10 14:07:10.883846] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.504 [2024-06-10 14:07:10.883941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.504 [2024-06-10 14:07:10.883959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.504 [2024-06-10 14:07:10.883969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.504 [2024-06-10 14:07:10.883977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.504 [2024-06-10 14:07:10.883995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.504 qpair failed and we were unable to recover it. 
00:38:56.504 [2024-06-10 14:07:10.893890] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.504 [2024-06-10 14:07:10.893989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.504 [2024-06-10 14:07:10.894007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.504 [2024-06-10 14:07:10.894016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.504 [2024-06-10 14:07:10.894025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.504 [2024-06-10 14:07:10.894043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.504 qpair failed and we were unable to recover it. 00:38:56.504 [2024-06-10 14:07:10.903911] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.504 [2024-06-10 14:07:10.903997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.504 [2024-06-10 14:07:10.904015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.504 [2024-06-10 14:07:10.904026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.504 [2024-06-10 14:07:10.904034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.504 [2024-06-10 14:07:10.904052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.504 qpair failed and we were unable to recover it. 00:38:56.504 [2024-06-10 14:07:10.913954] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.504 [2024-06-10 14:07:10.914041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.504 [2024-06-10 14:07:10.914059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.504 [2024-06-10 14:07:10.914070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.504 [2024-06-10 14:07:10.914079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.504 [2024-06-10 14:07:10.914097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.504 qpair failed and we were unable to recover it. 
00:38:56.504 [2024-06-10 14:07:10.923966] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.504 [2024-06-10 14:07:10.924054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.504 [2024-06-10 14:07:10.924072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.504 [2024-06-10 14:07:10.924082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.504 [2024-06-10 14:07:10.924094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.504 [2024-06-10 14:07:10.924112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.504 qpair failed and we were unable to recover it. 00:38:56.504 [2024-06-10 14:07:10.933999] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.504 [2024-06-10 14:07:10.934092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.504 [2024-06-10 14:07:10.934110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.504 [2024-06-10 14:07:10.934120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.504 [2024-06-10 14:07:10.934129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.505 [2024-06-10 14:07:10.934147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.505 qpair failed and we were unable to recover it. 00:38:56.505 [2024-06-10 14:07:10.944044] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.505 [2024-06-10 14:07:10.944127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.505 [2024-06-10 14:07:10.944145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.505 [2024-06-10 14:07:10.944154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.505 [2024-06-10 14:07:10.944163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.505 [2024-06-10 14:07:10.944181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.505 qpair failed and we were unable to recover it. 
00:38:56.505 [2024-06-10 14:07:10.954101] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.505 [2024-06-10 14:07:10.954187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.505 [2024-06-10 14:07:10.954204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.505 [2024-06-10 14:07:10.954214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.505 [2024-06-10 14:07:10.954223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.505 [2024-06-10 14:07:10.954241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.505 qpair failed and we were unable to recover it. 00:38:56.505 [2024-06-10 14:07:10.964072] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.505 [2024-06-10 14:07:10.964160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.505 [2024-06-10 14:07:10.964178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.505 [2024-06-10 14:07:10.964187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.505 [2024-06-10 14:07:10.964196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.505 [2024-06-10 14:07:10.964214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.505 qpair failed and we were unable to recover it. 00:38:56.764 [2024-06-10 14:07:10.974123] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.764 [2024-06-10 14:07:10.974223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.764 [2024-06-10 14:07:10.974244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.764 [2024-06-10 14:07:10.974254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.764 [2024-06-10 14:07:10.974263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.764 [2024-06-10 14:07:10.974283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.764 qpair failed and we were unable to recover it. 
00:38:56.764 [2024-06-10 14:07:10.984174] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.764 [2024-06-10 14:07:10.984289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.764 [2024-06-10 14:07:10.984310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.764 [2024-06-10 14:07:10.984320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.764 [2024-06-10 14:07:10.984329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.764 [2024-06-10 14:07:10.984348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.764 qpair failed and we were unable to recover it. 00:38:56.764 [2024-06-10 14:07:10.994177] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.764 [2024-06-10 14:07:10.994268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.764 [2024-06-10 14:07:10.994286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.764 [2024-06-10 14:07:10.994296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.764 [2024-06-10 14:07:10.994305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.764 [2024-06-10 14:07:10.994324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.764 qpair failed and we were unable to recover it. 00:38:56.764 [2024-06-10 14:07:11.004204] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.764 [2024-06-10 14:07:11.004297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.764 [2024-06-10 14:07:11.004315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.764 [2024-06-10 14:07:11.004325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.764 [2024-06-10 14:07:11.004334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.764 [2024-06-10 14:07:11.004352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.764 qpair failed and we were unable to recover it. 
00:38:56.764 [2024-06-10 14:07:11.014237] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.764 [2024-06-10 14:07:11.014470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.764 [2024-06-10 14:07:11.014491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.764 [2024-06-10 14:07:11.014505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.764 [2024-06-10 14:07:11.014515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.764 [2024-06-10 14:07:11.014534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.764 qpair failed and we were unable to recover it. 00:38:56.764 [2024-06-10 14:07:11.024200] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.764 [2024-06-10 14:07:11.024288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.764 [2024-06-10 14:07:11.024306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.764 [2024-06-10 14:07:11.024316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.764 [2024-06-10 14:07:11.024324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.764 [2024-06-10 14:07:11.024343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.764 qpair failed and we were unable to recover it. 00:38:56.764 [2024-06-10 14:07:11.034324] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.764 [2024-06-10 14:07:11.034450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.764 [2024-06-10 14:07:11.034467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.764 [2024-06-10 14:07:11.034477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.764 [2024-06-10 14:07:11.034486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.764 [2024-06-10 14:07:11.034504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.764 qpair failed and we were unable to recover it. 
00:38:56.764 [2024-06-10 14:07:11.044321] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.764 [2024-06-10 14:07:11.044407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.764 [2024-06-10 14:07:11.044425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.764 [2024-06-10 14:07:11.044435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.764 [2024-06-10 14:07:11.044443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.764 [2024-06-10 14:07:11.044462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.764 qpair failed and we were unable to recover it. 00:38:56.764 [2024-06-10 14:07:11.054381] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.764 [2024-06-10 14:07:11.054522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.764 [2024-06-10 14:07:11.054540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.764 [2024-06-10 14:07:11.054550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.764 [2024-06-10 14:07:11.054559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.764 [2024-06-10 14:07:11.054584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.764 qpair failed and we were unable to recover it. 00:38:56.764 [2024-06-10 14:07:11.064341] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.764 [2024-06-10 14:07:11.064433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.764 [2024-06-10 14:07:11.064451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.764 [2024-06-10 14:07:11.064461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.764 [2024-06-10 14:07:11.064469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.764 [2024-06-10 14:07:11.064487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.764 qpair failed and we were unable to recover it. 
00:38:56.764 [2024-06-10 14:07:11.074367] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.764 [2024-06-10 14:07:11.074457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.764 [2024-06-10 14:07:11.074474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.764 [2024-06-10 14:07:11.074484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.764 [2024-06-10 14:07:11.074492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.764 [2024-06-10 14:07:11.074511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.764 qpair failed and we were unable to recover it. 00:38:56.764 [2024-06-10 14:07:11.084440] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.764 [2024-06-10 14:07:11.084613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.764 [2024-06-10 14:07:11.084631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.764 [2024-06-10 14:07:11.084641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.764 [2024-06-10 14:07:11.084649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.764 [2024-06-10 14:07:11.084668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.764 qpair failed and we were unable to recover it. 00:38:56.764 [2024-06-10 14:07:11.094638] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.094752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.094770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.094780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.094789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.094807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 
00:38:56.765 [2024-06-10 14:07:11.104621] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.104712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.104730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.104743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.104751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.104770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 00:38:56.765 [2024-06-10 14:07:11.114638] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.114730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.114748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.114757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.114766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.114785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 00:38:56.765 [2024-06-10 14:07:11.124691] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.124806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.124824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.124834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.124843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.124861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 
00:38:56.765 [2024-06-10 14:07:11.134615] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.134706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.134725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.134734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.134743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.134761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 00:38:56.765 [2024-06-10 14:07:11.144675] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.144774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.144792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.144802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.144810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.144829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 00:38:56.765 [2024-06-10 14:07:11.154684] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.154783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.154801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.154811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.154819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.154837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 
00:38:56.765 [2024-06-10 14:07:11.164738] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.164856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.164874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.164883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.164892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.164910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 00:38:56.765 [2024-06-10 14:07:11.174664] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.174762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.174779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.174789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.174798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.174815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 00:38:56.765 [2024-06-10 14:07:11.184738] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.184823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.184841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.184851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.184860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.184878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 
00:38:56.765 [2024-06-10 14:07:11.194758] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.194869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.194890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.194901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.194909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.194927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 00:38:56.765 [2024-06-10 14:07:11.204803] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.204918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.204936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.204946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.204955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.204973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 00:38:56.765 [2024-06-10 14:07:11.214854] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.214947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.765 [2024-06-10 14:07:11.214964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.765 [2024-06-10 14:07:11.214974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.765 [2024-06-10 14:07:11.214983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.765 [2024-06-10 14:07:11.215000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.765 qpair failed and we were unable to recover it. 
00:38:56.765 [2024-06-10 14:07:11.224797] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.765 [2024-06-10 14:07:11.224883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.766 [2024-06-10 14:07:11.224900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.766 [2024-06-10 14:07:11.224910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.766 [2024-06-10 14:07:11.224918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:56.766 [2024-06-10 14:07:11.224936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:56.766 qpair failed and we were unable to recover it. 00:38:57.024 [2024-06-10 14:07:11.234888] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.024 [2024-06-10 14:07:11.234988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.024 [2024-06-10 14:07:11.235010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.024 [2024-06-10 14:07:11.235020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.024 [2024-06-10 14:07:11.235029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.024 [2024-06-10 14:07:11.235051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.024 qpair failed and we were unable to recover it. 00:38:57.024 [2024-06-10 14:07:11.244918] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.024 [2024-06-10 14:07:11.245018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.024 [2024-06-10 14:07:11.245039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.024 [2024-06-10 14:07:11.245049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.024 [2024-06-10 14:07:11.245058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.024 [2024-06-10 14:07:11.245078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.024 qpair failed and we were unable to recover it. 
00:38:57.024 [2024-06-10 14:07:11.254895] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.024 [2024-06-10 14:07:11.254988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.024 [2024-06-10 14:07:11.255006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.024 [2024-06-10 14:07:11.255016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.024 [2024-06-10 14:07:11.255024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.024 [2024-06-10 14:07:11.255044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.024 qpair failed and we were unable to recover it. 00:38:57.024 [2024-06-10 14:07:11.264995] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.024 [2024-06-10 14:07:11.265098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.024 [2024-06-10 14:07:11.265117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.024 [2024-06-10 14:07:11.265127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.024 [2024-06-10 14:07:11.265136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.024 [2024-06-10 14:07:11.265155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.024 qpair failed and we were unable to recover it. 00:38:57.024 [2024-06-10 14:07:11.274969] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.024 [2024-06-10 14:07:11.275061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.024 [2024-06-10 14:07:11.275079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.024 [2024-06-10 14:07:11.275089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.024 [2024-06-10 14:07:11.275098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.024 [2024-06-10 14:07:11.275117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.024 qpair failed and we were unable to recover it. 
00:38:57.024 [2024-06-10 14:07:11.285063] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.024 [2024-06-10 14:07:11.285179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.024 [2024-06-10 14:07:11.285201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.024 [2024-06-10 14:07:11.285211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.024 [2024-06-10 14:07:11.285220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.024 [2024-06-10 14:07:11.285238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.024 qpair failed and we were unable to recover it. 00:38:57.025 [2024-06-10 14:07:11.295080] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.295168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.295186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.295195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.295204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.295222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 00:38:57.025 [2024-06-10 14:07:11.305108] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.305192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.305209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.305220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.305228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.305246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 
00:38:57.025 [2024-06-10 14:07:11.315163] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.315251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.315269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.315279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.315288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.315306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 00:38:57.025 [2024-06-10 14:07:11.325106] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.325196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.325213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.325223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.325235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.325253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 00:38:57.025 [2024-06-10 14:07:11.335145] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.335239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.335257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.335267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.335276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.335294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 
00:38:57.025 [2024-06-10 14:07:11.345137] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.345222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.345239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.345249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.345258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.345276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 00:38:57.025 [2024-06-10 14:07:11.355299] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.355386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.355404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.355414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.355422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.355440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 00:38:57.025 [2024-06-10 14:07:11.365269] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.365388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.365405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.365415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.365423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.365441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 
00:38:57.025 [2024-06-10 14:07:11.375313] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.375406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.375423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.375433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.375441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.375459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 00:38:57.025 [2024-06-10 14:07:11.385331] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.385434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.385451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.385461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.385469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.385488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 00:38:57.025 [2024-06-10 14:07:11.395384] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.395472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.395490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.395500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.395508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.395526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 
00:38:57.025 [2024-06-10 14:07:11.405382] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.405476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.405493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.405503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.405512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.405530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 00:38:57.025 [2024-06-10 14:07:11.415426] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.025 [2024-06-10 14:07:11.415513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.025 [2024-06-10 14:07:11.415531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.025 [2024-06-10 14:07:11.415544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.025 [2024-06-10 14:07:11.415552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.025 [2024-06-10 14:07:11.415570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.025 qpair failed and we were unable to recover it. 00:38:57.025 [2024-06-10 14:07:11.425427] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.026 [2024-06-10 14:07:11.425513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.026 [2024-06-10 14:07:11.425531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.026 [2024-06-10 14:07:11.425541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.026 [2024-06-10 14:07:11.425549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.026 [2024-06-10 14:07:11.425567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.026 qpair failed and we were unable to recover it. 
00:38:57.026 [2024-06-10 14:07:11.435470] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.026 [2024-06-10 14:07:11.435573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.026 [2024-06-10 14:07:11.435598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.026 [2024-06-10 14:07:11.435608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.026 [2024-06-10 14:07:11.435616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.026 [2024-06-10 14:07:11.435634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.026 qpair failed and we were unable to recover it. 00:38:57.026 [2024-06-10 14:07:11.445439] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.026 [2024-06-10 14:07:11.445559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.026 [2024-06-10 14:07:11.445581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.026 [2024-06-10 14:07:11.445591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.026 [2024-06-10 14:07:11.445600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.026 [2024-06-10 14:07:11.445620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.026 qpair failed and we were unable to recover it. 00:38:57.026 [2024-06-10 14:07:11.455589] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.026 [2024-06-10 14:07:11.455729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.026 [2024-06-10 14:07:11.455748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.026 [2024-06-10 14:07:11.455758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.026 [2024-06-10 14:07:11.455766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.026 [2024-06-10 14:07:11.455784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.026 qpair failed and we were unable to recover it. 
00:38:57.026 [2024-06-10 14:07:11.465552] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.026 [2024-06-10 14:07:11.465648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.026 [2024-06-10 14:07:11.465666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.026 [2024-06-10 14:07:11.465676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.026 [2024-06-10 14:07:11.465684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.026 [2024-06-10 14:07:11.465702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.026 qpair failed and we were unable to recover it. 00:38:57.026 [2024-06-10 14:07:11.475583] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.026 [2024-06-10 14:07:11.475705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.026 [2024-06-10 14:07:11.475723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.026 [2024-06-10 14:07:11.475733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.026 [2024-06-10 14:07:11.475741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.026 [2024-06-10 14:07:11.475759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.026 qpair failed and we were unable to recover it. 00:38:57.026 [2024-06-10 14:07:11.485658] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.026 [2024-06-10 14:07:11.485775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.026 [2024-06-10 14:07:11.485793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.026 [2024-06-10 14:07:11.485802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.026 [2024-06-10 14:07:11.485811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.026 [2024-06-10 14:07:11.485829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.026 qpair failed and we were unable to recover it. 
00:38:57.284 [2024-06-10 14:07:11.495592] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.284 [2024-06-10 14:07:11.495686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.284 [2024-06-10 14:07:11.495708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.284 [2024-06-10 14:07:11.495718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.284 [2024-06-10 14:07:11.495727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.284 [2024-06-10 14:07:11.495746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.284 qpair failed and we were unable to recover it. 00:38:57.284 [2024-06-10 14:07:11.505689] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.505869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.505890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.505904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.505913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.505934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 00:38:57.285 [2024-06-10 14:07:11.515728] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.515827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.515845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.515855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.515863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.515882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 
00:38:57.285 [2024-06-10 14:07:11.525712] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.525808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.525826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.525836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.525845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.525863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 00:38:57.285 [2024-06-10 14:07:11.535827] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.535930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.535948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.535958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.535967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.535985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 00:38:57.285 [2024-06-10 14:07:11.545734] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.545824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.545841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.545851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.545860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.545878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 
00:38:57.285 [2024-06-10 14:07:11.555818] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.555903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.555925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.555935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.555943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.555963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 00:38:57.285 [2024-06-10 14:07:11.565876] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.565965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.565983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.565992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.566001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.566019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 00:38:57.285 [2024-06-10 14:07:11.575892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.576017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.576034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.576044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.576053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.576071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 
00:38:57.285 [2024-06-10 14:07:11.585899] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.585989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.586006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.586016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.586025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.586043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 00:38:57.285 [2024-06-10 14:07:11.595994] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.596083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.596103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.596113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.596121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.596140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 00:38:57.285 [2024-06-10 14:07:11.605963] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.606136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.606153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.606163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.606171] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.606190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 
00:38:57.285 [2024-06-10 14:07:11.615999] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.616110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.616128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.616137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.616146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.616164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 00:38:57.285 [2024-06-10 14:07:11.626049] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.285 [2024-06-10 14:07:11.626143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.285 [2024-06-10 14:07:11.626161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.285 [2024-06-10 14:07:11.626171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.285 [2024-06-10 14:07:11.626179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.285 [2024-06-10 14:07:11.626198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.285 qpair failed and we were unable to recover it. 00:38:57.285 [2024-06-10 14:07:11.636077] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.286 [2024-06-10 14:07:11.636159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.286 [2024-06-10 14:07:11.636176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.286 [2024-06-10 14:07:11.636186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.286 [2024-06-10 14:07:11.636195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.286 [2024-06-10 14:07:11.636215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.286 qpair failed and we were unable to recover it. 
00:38:57.286 [2024-06-10 14:07:11.646079] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.286 [2024-06-10 14:07:11.646167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.286 [2024-06-10 14:07:11.646185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.286 [2024-06-10 14:07:11.646194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.286 [2024-06-10 14:07:11.646204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.286 [2024-06-10 14:07:11.646222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.286 qpair failed and we were unable to recover it. 00:38:57.286 [2024-06-10 14:07:11.656133] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.286 [2024-06-10 14:07:11.656221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.286 [2024-06-10 14:07:11.656239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.286 [2024-06-10 14:07:11.656249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.286 [2024-06-10 14:07:11.656258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.286 [2024-06-10 14:07:11.656276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.286 qpair failed and we were unable to recover it. 00:38:57.286 [2024-06-10 14:07:11.666125] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.286 [2024-06-10 14:07:11.666213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.286 [2024-06-10 14:07:11.666230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.286 [2024-06-10 14:07:11.666240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.286 [2024-06-10 14:07:11.666248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.286 [2024-06-10 14:07:11.666266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.286 qpair failed and we were unable to recover it. 
00:38:57.286 [2024-06-10 14:07:11.676184] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.286 [2024-06-10 14:07:11.676267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.286 [2024-06-10 14:07:11.676285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.286 [2024-06-10 14:07:11.676295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.286 [2024-06-10 14:07:11.676303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.286 [2024-06-10 14:07:11.676321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.286 qpair failed and we were unable to recover it. 00:38:57.286 [2024-06-10 14:07:11.686202] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.286 [2024-06-10 14:07:11.686318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.286 [2024-06-10 14:07:11.686342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.286 [2024-06-10 14:07:11.686353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.286 [2024-06-10 14:07:11.686362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.286 [2024-06-10 14:07:11.686381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.286 qpair failed and we were unable to recover it. 00:38:57.286 [2024-06-10 14:07:11.696258] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.286 [2024-06-10 14:07:11.696349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.286 [2024-06-10 14:07:11.696366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.286 [2024-06-10 14:07:11.696375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.286 [2024-06-10 14:07:11.696384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.286 [2024-06-10 14:07:11.696403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.286 qpair failed and we were unable to recover it. 
00:38:57.286 [2024-06-10 14:07:11.706304] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.286 [2024-06-10 14:07:11.706394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.286 [2024-06-10 14:07:11.706412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.286 [2024-06-10 14:07:11.706422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.286 [2024-06-10 14:07:11.706430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.286 [2024-06-10 14:07:11.706448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.286 qpair failed and we were unable to recover it. 00:38:57.286 [2024-06-10 14:07:11.716294] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.286 [2024-06-10 14:07:11.716379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.286 [2024-06-10 14:07:11.716396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.286 [2024-06-10 14:07:11.716405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.286 [2024-06-10 14:07:11.716414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.286 [2024-06-10 14:07:11.716431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.286 qpair failed and we were unable to recover it. 00:38:57.286 [2024-06-10 14:07:11.726333] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.286 [2024-06-10 14:07:11.726419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.286 [2024-06-10 14:07:11.726436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.286 [2024-06-10 14:07:11.726446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.286 [2024-06-10 14:07:11.726457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.286 [2024-06-10 14:07:11.726475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.286 qpair failed and we were unable to recover it. 
00:38:57.286 [2024-06-10 14:07:11.736377] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.286 [2024-06-10 14:07:11.736464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.286 [2024-06-10 14:07:11.736482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.286 [2024-06-10 14:07:11.736492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.286 [2024-06-10 14:07:11.736500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.286 [2024-06-10 14:07:11.736518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.286 qpair failed and we were unable to recover it. 00:38:57.286 [2024-06-10 14:07:11.746411] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.286 [2024-06-10 14:07:11.746498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.286 [2024-06-10 14:07:11.746515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.286 [2024-06-10 14:07:11.746525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.286 [2024-06-10 14:07:11.746533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.286 [2024-06-10 14:07:11.746551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.286 qpair failed and we were unable to recover it. 00:38:57.545 [2024-06-10 14:07:11.756473] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.545 [2024-06-10 14:07:11.756571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.545 [2024-06-10 14:07:11.756600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.545 [2024-06-10 14:07:11.756610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.545 [2024-06-10 14:07:11.756619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.545 [2024-06-10 14:07:11.756639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.545 qpair failed and we were unable to recover it. 
00:38:57.545 [2024-06-10 14:07:11.766488] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.545 [2024-06-10 14:07:11.766595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.545 [2024-06-10 14:07:11.766616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.545 [2024-06-10 14:07:11.766626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.545 [2024-06-10 14:07:11.766635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.545 [2024-06-10 14:07:11.766655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.545 qpair failed and we were unable to recover it. 00:38:57.545 [2024-06-10 14:07:11.776510] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.545 [2024-06-10 14:07:11.776610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.545 [2024-06-10 14:07:11.776629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.545 [2024-06-10 14:07:11.776639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.545 [2024-06-10 14:07:11.776648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.545 [2024-06-10 14:07:11.776666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.545 qpair failed and we were unable to recover it. 00:38:57.545 [2024-06-10 14:07:11.786527] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.545 [2024-06-10 14:07:11.786619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.545 [2024-06-10 14:07:11.786637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.545 [2024-06-10 14:07:11.786647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.545 [2024-06-10 14:07:11.786655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.545 [2024-06-10 14:07:11.786674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.545 qpair failed and we were unable to recover it. 
00:38:57.545 [2024-06-10 14:07:11.796556] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.545 [2024-06-10 14:07:11.796645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.545 [2024-06-10 14:07:11.796663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.545 [2024-06-10 14:07:11.796673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.545 [2024-06-10 14:07:11.796681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.545 [2024-06-10 14:07:11.796699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.545 qpair failed and we were unable to recover it. 00:38:57.545 [2024-06-10 14:07:11.806566] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.545 [2024-06-10 14:07:11.806660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.545 [2024-06-10 14:07:11.806678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.545 [2024-06-10 14:07:11.806688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.545 [2024-06-10 14:07:11.806696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.545 [2024-06-10 14:07:11.806715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.545 qpair failed and we were unable to recover it. 00:38:57.545 [2024-06-10 14:07:11.816619] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.545 [2024-06-10 14:07:11.816704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.545 [2024-06-10 14:07:11.816722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.545 [2024-06-10 14:07:11.816731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.545 [2024-06-10 14:07:11.816743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.545 [2024-06-10 14:07:11.816761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.545 qpair failed and we were unable to recover it. 
00:38:57.545 [2024-06-10 14:07:11.826616] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.545 [2024-06-10 14:07:11.826705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.545 [2024-06-10 14:07:11.826723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.545 [2024-06-10 14:07:11.826733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.545 [2024-06-10 14:07:11.826741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.545 [2024-06-10 14:07:11.826760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.545 qpair failed and we were unable to recover it. 00:38:57.545 [2024-06-10 14:07:11.836675] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.545 [2024-06-10 14:07:11.836762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.545 [2024-06-10 14:07:11.836780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.545 [2024-06-10 14:07:11.836789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.545 [2024-06-10 14:07:11.836798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.545 [2024-06-10 14:07:11.836817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.545 qpair failed and we were unable to recover it. 00:38:57.545 [2024-06-10 14:07:11.846692] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.545 [2024-06-10 14:07:11.846779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.545 [2024-06-10 14:07:11.846797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.545 [2024-06-10 14:07:11.846807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.545 [2024-06-10 14:07:11.846815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.545 [2024-06-10 14:07:11.846834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.545 qpair failed and we were unable to recover it. 
00:38:57.545 [2024-06-10 14:07:11.856758] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.545 [2024-06-10 14:07:11.856846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.545 [2024-06-10 14:07:11.856864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.545 [2024-06-10 14:07:11.856874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.545 [2024-06-10 14:07:11.856882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.545 [2024-06-10 14:07:11.856900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.545 qpair failed and we were unable to recover it. 00:38:57.546 [2024-06-10 14:07:11.866747] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.866836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.866854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.866864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.866872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.866890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 00:38:57.546 [2024-06-10 14:07:11.876753] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.876841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.876858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.876868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.876877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.876894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 
00:38:57.546 [2024-06-10 14:07:11.886849] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.886956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.886974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.886984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.886992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.887011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 00:38:57.546 [2024-06-10 14:07:11.896822] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.896923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.896940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.896950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.896959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.896978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 00:38:57.546 [2024-06-10 14:07:11.906878] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.906961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.906979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.906991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.907000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.907017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 
00:38:57.546 [2024-06-10 14:07:11.916907] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.916989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.917007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.917017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.917025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.917043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 00:38:57.546 [2024-06-10 14:07:11.926917] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.927006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.927023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.927033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.927041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.927059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 00:38:57.546 [2024-06-10 14:07:11.936961] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.937052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.937070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.937079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.937088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.937106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 
00:38:57.546 [2024-06-10 14:07:11.946969] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.947053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.947070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.947080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.947089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.947107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 00:38:57.546 [2024-06-10 14:07:11.956990] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.957076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.957093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.957103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.957111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.957129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 00:38:57.546 [2024-06-10 14:07:11.967073] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.967161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.967178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.967187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.967196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.967214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 
00:38:57.546 [2024-06-10 14:07:11.977070] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.977157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.977175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.977184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.977193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.977211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 00:38:57.546 [2024-06-10 14:07:11.987103] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.987186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.546 [2024-06-10 14:07:11.987203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.546 [2024-06-10 14:07:11.987213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.546 [2024-06-10 14:07:11.987221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.546 [2024-06-10 14:07:11.987239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.546 qpair failed and we were unable to recover it. 00:38:57.546 [2024-06-10 14:07:11.997125] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.546 [2024-06-10 14:07:11.997237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.547 [2024-06-10 14:07:11.997257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.547 [2024-06-10 14:07:11.997267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.547 [2024-06-10 14:07:11.997275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.547 [2024-06-10 14:07:11.997293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.547 qpair failed and we were unable to recover it. 
00:38:57.547 [2024-06-10 14:07:12.007136] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.547 [2024-06-10 14:07:12.007221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.547 [2024-06-10 14:07:12.007239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.547 [2024-06-10 14:07:12.007249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.547 [2024-06-10 14:07:12.007258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.547 [2024-06-10 14:07:12.007276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.547 qpair failed and we were unable to recover it. 00:38:57.815 [2024-06-10 14:07:12.017206] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.815 [2024-06-10 14:07:12.017344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.815 [2024-06-10 14:07:12.017365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.815 [2024-06-10 14:07:12.017375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.815 [2024-06-10 14:07:12.017384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.815 [2024-06-10 14:07:12.017403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.815 qpair failed and we were unable to recover it. 00:38:57.815 [2024-06-10 14:07:12.027218] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.815 [2024-06-10 14:07:12.027313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.815 [2024-06-10 14:07:12.027334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.815 [2024-06-10 14:07:12.027344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.815 [2024-06-10 14:07:12.027353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.815 [2024-06-10 14:07:12.027372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.815 qpair failed and we were unable to recover it. 
00:38:57.815 [2024-06-10 14:07:12.037160] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.815 [2024-06-10 14:07:12.037251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.815 [2024-06-10 14:07:12.037269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.815 [2024-06-10 14:07:12.037279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.815 [2024-06-10 14:07:12.037288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.815 [2024-06-10 14:07:12.037310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.047265] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.047357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.047375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.047385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.047394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.047412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.057277] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.057369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.057386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.057396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.057405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.057423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 
00:38:57.816 [2024-06-10 14:07:12.067322] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.067405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.067423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.067433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.067442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.067460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.077351] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.077437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.077455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.077464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.077472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.077490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.087377] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.087464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.087485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.087494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.087503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.087521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 
00:38:57.816 [2024-06-10 14:07:12.097409] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.097498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.097516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.097526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.097534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.097552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.107454] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.107533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.107551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.107560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.107569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.107593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.117461] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.117554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.117571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.117587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.117595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.117614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 
00:38:57.816 [2024-06-10 14:07:12.127491] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.127582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.127600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.127609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.127621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.127639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.137530] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.137622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.137640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.137649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.137658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.137676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.147599] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.147685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.147703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.147713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.147721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.147739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 
00:38:57.816 [2024-06-10 14:07:12.157598] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.157679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.157697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.157707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.157716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.157734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.167624] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.167711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.167728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.167738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.167746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.167764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.177630] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.177725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.177743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.177753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.177761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.177779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 
00:38:57.816 [2024-06-10 14:07:12.187696] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.187778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.187796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.187805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.187813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.187832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.197719] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.197823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.197840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.197850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.197858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.197877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.207728] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.207813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.207830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.207840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.207849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.207866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 
00:38:57.816 [2024-06-10 14:07:12.217760] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.217848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.217865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.217875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.217886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.217904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.816 [2024-06-10 14:07:12.227813] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.816 [2024-06-10 14:07:12.227901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.816 [2024-06-10 14:07:12.227918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.816 [2024-06-10 14:07:12.227928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.816 [2024-06-10 14:07:12.227937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.816 [2024-06-10 14:07:12.227955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.816 qpair failed and we were unable to recover it. 00:38:57.817 [2024-06-10 14:07:12.237795] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.817 [2024-06-10 14:07:12.237879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.817 [2024-06-10 14:07:12.237896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.817 [2024-06-10 14:07:12.237906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.817 [2024-06-10 14:07:12.237915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.817 [2024-06-10 14:07:12.237933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.817 qpair failed and we were unable to recover it. 
00:38:57.817 [2024-06-10 14:07:12.247872] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.817 [2024-06-10 14:07:12.247990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.817 [2024-06-10 14:07:12.248007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.817 [2024-06-10 14:07:12.248017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.817 [2024-06-10 14:07:12.248026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.817 [2024-06-10 14:07:12.248044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.817 qpair failed and we were unable to recover it. 00:38:57.817 [2024-06-10 14:07:12.257934] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.817 [2024-06-10 14:07:12.258026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.817 [2024-06-10 14:07:12.258044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.817 [2024-06-10 14:07:12.258054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.817 [2024-06-10 14:07:12.258062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.817 [2024-06-10 14:07:12.258080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.817 qpair failed and we were unable to recover it. 00:38:57.817 [2024-06-10 14:07:12.267920] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.817 [2024-06-10 14:07:12.268004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.817 [2024-06-10 14:07:12.268022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.817 [2024-06-10 14:07:12.268031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.817 [2024-06-10 14:07:12.268040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.817 [2024-06-10 14:07:12.268058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.817 qpair failed and we were unable to recover it. 
00:38:57.817 [2024-06-10 14:07:12.277863] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.817 [2024-06-10 14:07:12.277963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.817 [2024-06-10 14:07:12.277984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.817 [2024-06-10 14:07:12.277994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.817 [2024-06-10 14:07:12.278003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:57.817 [2024-06-10 14:07:12.278022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:57.817 qpair failed and we were unable to recover it. 00:38:58.082 [2024-06-10 14:07:12.287986] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.082 [2024-06-10 14:07:12.288103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.082 [2024-06-10 14:07:12.288123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.082 [2024-06-10 14:07:12.288134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.082 [2024-06-10 14:07:12.288142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.082 [2024-06-10 14:07:12.288161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.082 qpair failed and we were unable to recover it. 00:38:58.082 [2024-06-10 14:07:12.298001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.082 [2024-06-10 14:07:12.298105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.082 [2024-06-10 14:07:12.298126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.082 [2024-06-10 14:07:12.298136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.082 [2024-06-10 14:07:12.298144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.082 [2024-06-10 14:07:12.298164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.082 qpair failed and we were unable to recover it. 
00:38:58.082 [2024-06-10 14:07:12.308082] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.082 [2024-06-10 14:07:12.308175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.082 [2024-06-10 14:07:12.308193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.082 [2024-06-10 14:07:12.308206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.082 [2024-06-10 14:07:12.308215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.082 [2024-06-10 14:07:12.308233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.082 qpair failed and we were unable to recover it. 00:38:58.082 [2024-06-10 14:07:12.317995] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.082 [2024-06-10 14:07:12.318080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.082 [2024-06-10 14:07:12.318098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.082 [2024-06-10 14:07:12.318108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.082 [2024-06-10 14:07:12.318116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.082 [2024-06-10 14:07:12.318134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.082 qpair failed and we were unable to recover it. 00:38:58.082 [2024-06-10 14:07:12.328074] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.082 [2024-06-10 14:07:12.328165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.082 [2024-06-10 14:07:12.328183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.082 [2024-06-10 14:07:12.328192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.082 [2024-06-10 14:07:12.328201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.082 [2024-06-10 14:07:12.328219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.082 qpair failed and we were unable to recover it. 
00:38:58.082 [2024-06-10 14:07:12.338093] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.082 [2024-06-10 14:07:12.338246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.082 [2024-06-10 14:07:12.338264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.082 [2024-06-10 14:07:12.338275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.082 [2024-06-10 14:07:12.338284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.082 [2024-06-10 14:07:12.338302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.082 qpair failed and we were unable to recover it. 00:38:58.082 [2024-06-10 14:07:12.348135] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.082 [2024-06-10 14:07:12.348224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.082 [2024-06-10 14:07:12.348242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.082 [2024-06-10 14:07:12.348252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.082 [2024-06-10 14:07:12.348261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.082 [2024-06-10 14:07:12.348279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.082 qpair failed and we were unable to recover it. 00:38:58.082 [2024-06-10 14:07:12.358102] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.082 [2024-06-10 14:07:12.358188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.082 [2024-06-10 14:07:12.358205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.082 [2024-06-10 14:07:12.358215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.082 [2024-06-10 14:07:12.358224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.082 [2024-06-10 14:07:12.358242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.082 qpair failed and we were unable to recover it. 
00:38:58.082 [2024-06-10 14:07:12.368192] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.082 [2024-06-10 14:07:12.368315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.082 [2024-06-10 14:07:12.368332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.082 [2024-06-10 14:07:12.368343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.082 [2024-06-10 14:07:12.368352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.082 [2024-06-10 14:07:12.368370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.082 qpair failed and we were unable to recover it. 00:38:58.082 [2024-06-10 14:07:12.378143] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.082 [2024-06-10 14:07:12.378277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.082 [2024-06-10 14:07:12.378295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.082 [2024-06-10 14:07:12.378305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.082 [2024-06-10 14:07:12.378314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.082 [2024-06-10 14:07:12.378332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 00:38:58.083 [2024-06-10 14:07:12.388250] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.388338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.388355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.388365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.388374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.388392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 
00:38:58.083 [2024-06-10 14:07:12.398287] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.398370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.398390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.398400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.398409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.398427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 00:38:58.083 [2024-06-10 14:07:12.408305] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.408421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.408440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.408451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.408459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.408478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 00:38:58.083 [2024-06-10 14:07:12.418380] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.418475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.418493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.418502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.418511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.418529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 
00:38:58.083 [2024-06-10 14:07:12.428388] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.428485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.428503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.428513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.428521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.428540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 00:38:58.083 [2024-06-10 14:07:12.438344] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.438467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.438484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.438495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.438503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.438527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 00:38:58.083 [2024-06-10 14:07:12.448441] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.448563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.448589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.448599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.448608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.448627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 
00:38:58.083 [2024-06-10 14:07:12.458481] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.458573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.458596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.458606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.458615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.458633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 00:38:58.083 [2024-06-10 14:07:12.468464] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.468587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.468605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.468615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.468625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.468644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 00:38:58.083 [2024-06-10 14:07:12.478508] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.478601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.478618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.478628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.478637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.478655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 
00:38:58.083 [2024-06-10 14:07:12.488516] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.488668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.488689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.488699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.488707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.488726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 00:38:58.083 [2024-06-10 14:07:12.498565] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.498677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.498695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.498704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.498713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.498731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 00:38:58.083 [2024-06-10 14:07:12.508589] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.083 [2024-06-10 14:07:12.508676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.083 [2024-06-10 14:07:12.508693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.083 [2024-06-10 14:07:12.508703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.083 [2024-06-10 14:07:12.508712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.083 [2024-06-10 14:07:12.508731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.083 qpair failed and we were unable to recover it. 
00:38:58.084 [2024-06-10 14:07:12.518596] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.084 [2024-06-10 14:07:12.518685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.084 [2024-06-10 14:07:12.518703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.084 [2024-06-10 14:07:12.518712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.084 [2024-06-10 14:07:12.518721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.084 [2024-06-10 14:07:12.518739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.084 qpair failed and we were unable to recover it. 00:38:58.084 [2024-06-10 14:07:12.528641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.084 [2024-06-10 14:07:12.528754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.084 [2024-06-10 14:07:12.528772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.084 [2024-06-10 14:07:12.528782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.084 [2024-06-10 14:07:12.528790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.084 [2024-06-10 14:07:12.528812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.084 qpair failed and we were unable to recover it. 00:38:58.084 [2024-06-10 14:07:12.538662] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.084 [2024-06-10 14:07:12.538753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.084 [2024-06-10 14:07:12.538770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.084 [2024-06-10 14:07:12.538780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.084 [2024-06-10 14:07:12.538789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.084 [2024-06-10 14:07:12.538807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.084 qpair failed and we were unable to recover it. 
00:38:58.084 [2024-06-10 14:07:12.548690] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.084 [2024-06-10 14:07:12.548804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.084 [2024-06-10 14:07:12.548825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.084 [2024-06-10 14:07:12.548835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.084 [2024-06-10 14:07:12.548844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.084 [2024-06-10 14:07:12.548864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.084 qpair failed and we were unable to recover it. 00:38:58.342 [2024-06-10 14:07:12.558730] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.342 [2024-06-10 14:07:12.558863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.342 [2024-06-10 14:07:12.558884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.342 [2024-06-10 14:07:12.558894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.342 [2024-06-10 14:07:12.558903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.342 [2024-06-10 14:07:12.558922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.342 qpair failed and we were unable to recover it. 00:38:58.342 [2024-06-10 14:07:12.568694] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.342 [2024-06-10 14:07:12.568786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.342 [2024-06-10 14:07:12.568804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.342 [2024-06-10 14:07:12.568814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.342 [2024-06-10 14:07:12.568823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.342 [2024-06-10 14:07:12.568842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.342 qpair failed and we were unable to recover it. 
00:38:58.342 [2024-06-10 14:07:12.578822] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.342 [2024-06-10 14:07:12.578916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.342 [2024-06-10 14:07:12.578935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.342 [2024-06-10 14:07:12.578945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.342 [2024-06-10 14:07:12.578954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.578972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 00:38:58.343 [2024-06-10 14:07:12.588834] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.588932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.588950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.588960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.588969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.588986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 00:38:58.343 [2024-06-10 14:07:12.598878] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.598968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.598985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.598995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.599004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.599022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 
00:38:58.343 [2024-06-10 14:07:12.608866] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.608998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.609016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.609026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.609034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.609053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 00:38:58.343 [2024-06-10 14:07:12.618859] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.618990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.619009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.619021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.619034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.619053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 00:38:58.343 [2024-06-10 14:07:12.628866] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.628998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.629016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.629026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.629034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.629053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 
00:38:58.343 [2024-06-10 14:07:12.638873] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.638971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.638988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.638998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.639007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.639025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 00:38:58.343 [2024-06-10 14:07:12.648990] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.649083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.649101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.649110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.649119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.649137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 00:38:58.343 [2024-06-10 14:07:12.658972] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.659106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.659124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.659134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.659142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.659160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 
00:38:58.343 [2024-06-10 14:07:12.669052] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.669159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.669177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.669187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.669196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.669214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 00:38:58.343 [2024-06-10 14:07:12.679049] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.679133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.679151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.679161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.679170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.679188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 00:38:58.343 [2024-06-10 14:07:12.689037] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.689123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.689140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.689150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.689159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.689176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 
00:38:58.343 [2024-06-10 14:07:12.699160] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.699248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.699266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.699275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.699284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.699302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 00:38:58.343 [2024-06-10 14:07:12.709165] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.343 [2024-06-10 14:07:12.709252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.343 [2024-06-10 14:07:12.709269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.343 [2024-06-10 14:07:12.709282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.343 [2024-06-10 14:07:12.709291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.343 [2024-06-10 14:07:12.709309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.343 qpair failed and we were unable to recover it. 00:38:58.343 [2024-06-10 14:07:12.719183] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.344 [2024-06-10 14:07:12.719292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.344 [2024-06-10 14:07:12.719309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.344 [2024-06-10 14:07:12.719319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.344 [2024-06-10 14:07:12.719328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.344 [2024-06-10 14:07:12.719346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.344 qpair failed and we were unable to recover it. 
00:38:58.344 [2024-06-10 14:07:12.729228] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.344 [2024-06-10 14:07:12.729312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.344 [2024-06-10 14:07:12.729330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.344 [2024-06-10 14:07:12.729340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.344 [2024-06-10 14:07:12.729349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.344 [2024-06-10 14:07:12.729367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.344 qpair failed and we were unable to recover it. 00:38:58.344 [2024-06-10 14:07:12.739241] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.344 [2024-06-10 14:07:12.739331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.344 [2024-06-10 14:07:12.739349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.344 [2024-06-10 14:07:12.739359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.344 [2024-06-10 14:07:12.739368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.344 [2024-06-10 14:07:12.739385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.344 qpair failed and we were unable to recover it. 00:38:58.344 [2024-06-10 14:07:12.749204] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.344 [2024-06-10 14:07:12.749288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.344 [2024-06-10 14:07:12.749306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.344 [2024-06-10 14:07:12.749316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.344 [2024-06-10 14:07:12.749325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.344 [2024-06-10 14:07:12.749344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.344 qpair failed and we were unable to recover it. 
00:38:58.344 [2024-06-10 14:07:12.759230] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.344 [2024-06-10 14:07:12.759317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.344 [2024-06-10 14:07:12.759335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.344 [2024-06-10 14:07:12.759345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.344 [2024-06-10 14:07:12.759354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.344 [2024-06-10 14:07:12.759372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.344 qpair failed and we were unable to recover it. 00:38:58.344 [2024-06-10 14:07:12.769292] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.344 [2024-06-10 14:07:12.769383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.344 [2024-06-10 14:07:12.769401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.344 [2024-06-10 14:07:12.769410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.344 [2024-06-10 14:07:12.769419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.344 [2024-06-10 14:07:12.769437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.344 qpair failed and we were unable to recover it. 00:38:58.344 [2024-06-10 14:07:12.779285] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.344 [2024-06-10 14:07:12.779369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.344 [2024-06-10 14:07:12.779387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.344 [2024-06-10 14:07:12.779397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.344 [2024-06-10 14:07:12.779406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.344 [2024-06-10 14:07:12.779424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.344 qpair failed and we were unable to recover it. 
00:38:58.344 [2024-06-10 14:07:12.789345] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.344 [2024-06-10 14:07:12.789431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.344 [2024-06-10 14:07:12.789449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.344 [2024-06-10 14:07:12.789458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.344 [2024-06-10 14:07:12.789467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.344 [2024-06-10 14:07:12.789485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.344 qpair failed and we were unable to recover it. 00:38:58.344 [2024-06-10 14:07:12.799401] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.344 [2024-06-10 14:07:12.799515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.344 [2024-06-10 14:07:12.799533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.344 [2024-06-10 14:07:12.799546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.344 [2024-06-10 14:07:12.799555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.344 [2024-06-10 14:07:12.799573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.344 qpair failed and we were unable to recover it. 00:38:58.344 [2024-06-10 14:07:12.809484] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.344 [2024-06-10 14:07:12.809609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.344 [2024-06-10 14:07:12.809630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.344 [2024-06-10 14:07:12.809640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.344 [2024-06-10 14:07:12.809649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.344 [2024-06-10 14:07:12.809669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.344 qpair failed and we were unable to recover it. 
00:38:58.602 [2024-06-10 14:07:12.819434] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.819585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.819607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.819617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.819625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.819645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 00:38:58.603 [2024-06-10 14:07:12.829436] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.829542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.829560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.829570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.829583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.829602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 00:38:58.603 [2024-06-10 14:07:12.839550] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.839638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.839657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.839667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.839676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.839694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 
00:38:58.603 [2024-06-10 14:07:12.849569] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.849669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.849688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.849698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.849706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.849725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 00:38:58.603 [2024-06-10 14:07:12.859519] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.859619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.859637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.859647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.859656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.859675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 00:38:58.603 [2024-06-10 14:07:12.869559] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.869651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.869669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.869678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.869687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.869705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 
00:38:58.603 [2024-06-10 14:07:12.879636] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.879722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.879740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.879750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.879758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.879776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 00:38:58.603 [2024-06-10 14:07:12.889625] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.889715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.889735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.889745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.889754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.889772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 00:38:58.603 [2024-06-10 14:07:12.899683] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.899771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.899789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.899799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.899808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.899826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 
00:38:58.603 [2024-06-10 14:07:12.909658] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.909747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.909765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.909774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.909783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.909801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 00:38:58.603 [2024-06-10 14:07:12.919915] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.920002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.920020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.920030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.920039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.920057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 00:38:58.603 [2024-06-10 14:07:12.929785] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.929902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.929920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.929930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.929938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.929960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 
00:38:58.603 [2024-06-10 14:07:12.939851] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.603 [2024-06-10 14:07:12.939954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.603 [2024-06-10 14:07:12.939972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.603 [2024-06-10 14:07:12.939982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.603 [2024-06-10 14:07:12.939991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.603 [2024-06-10 14:07:12.940010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.603 qpair failed and we were unable to recover it. 00:38:58.604 [2024-06-10 14:07:12.949868] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:12.949970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:12.949988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:12.949999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:12.950007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:12.950025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 00:38:58.604 [2024-06-10 14:07:12.959927] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:12.960020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:12.960038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:12.960048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:12.960057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:12.960075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 
00:38:58.604 [2024-06-10 14:07:12.969859] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:12.969985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:12.970003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:12.970013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:12.970022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:12.970040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 00:38:58.604 [2024-06-10 14:07:12.979934] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:12.980023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:12.980044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:12.980054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:12.980062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:12.980081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 00:38:58.604 [2024-06-10 14:07:12.989970] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:12.990057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:12.990075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:12.990085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:12.990093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:12.990112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 
00:38:58.604 [2024-06-10 14:07:13.000001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:13.000095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:13.000113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:13.000123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:13.000132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:13.000150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 00:38:58.604 [2024-06-10 14:07:13.009958] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:13.010050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:13.010068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:13.010077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:13.010086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:13.010104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 00:38:58.604 [2024-06-10 14:07:13.019999] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:13.020091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:13.020109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:13.020119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:13.020131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:13.020149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 
00:38:58.604 [2024-06-10 14:07:13.030015] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:13.030105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:13.030123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:13.030133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:13.030141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:13.030159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 00:38:58.604 [2024-06-10 14:07:13.040159] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:13.040244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:13.040262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:13.040272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:13.040280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:13.040299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 00:38:58.604 [2024-06-10 14:07:13.050074] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:13.050166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:13.050184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:13.050194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:13.050202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:13.050221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 
00:38:58.604 [2024-06-10 14:07:13.060204] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:13.060307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:13.060324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:13.060333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:13.060342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:13.060361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 00:38:58.604 [2024-06-10 14:07:13.070215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.604 [2024-06-10 14:07:13.070320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.604 [2024-06-10 14:07:13.070341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.604 [2024-06-10 14:07:13.070351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.604 [2024-06-10 14:07:13.070360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.604 [2024-06-10 14:07:13.070379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.604 qpair failed and we were unable to recover it. 00:38:58.863 [2024-06-10 14:07:13.080227] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.863 [2024-06-10 14:07:13.080327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.863 [2024-06-10 14:07:13.080348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.863 [2024-06-10 14:07:13.080359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.863 [2024-06-10 14:07:13.080368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.863 [2024-06-10 14:07:13.080387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.863 qpair failed and we were unable to recover it. 
00:38:58.863 [2024-06-10 14:07:13.090212] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.863 [2024-06-10 14:07:13.090340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.863 [2024-06-10 14:07:13.090359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.863 [2024-06-10 14:07:13.090369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.863 [2024-06-10 14:07:13.090378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.863 [2024-06-10 14:07:13.090396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.863 qpair failed and we were unable to recover it. 00:38:58.863 [2024-06-10 14:07:13.100213] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.863 [2024-06-10 14:07:13.100304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.863 [2024-06-10 14:07:13.100323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.863 [2024-06-10 14:07:13.100333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.863 [2024-06-10 14:07:13.100341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.863 [2024-06-10 14:07:13.100360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.863 qpair failed and we were unable to recover it. 00:38:58.863 [2024-06-10 14:07:13.110299] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.863 [2024-06-10 14:07:13.110387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.863 [2024-06-10 14:07:13.110405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.863 [2024-06-10 14:07:13.110418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.863 [2024-06-10 14:07:13.110427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.863 [2024-06-10 14:07:13.110446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.863 qpair failed and we were unable to recover it. 
00:38:58.863 [2024-06-10 14:07:13.120269] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.863 [2024-06-10 14:07:13.120398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.863 [2024-06-10 14:07:13.120415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.863 [2024-06-10 14:07:13.120425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.863 [2024-06-10 14:07:13.120434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.863 [2024-06-10 14:07:13.120452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.863 qpair failed and we were unable to recover it. 00:38:58.863 [2024-06-10 14:07:13.130383] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.863 [2024-06-10 14:07:13.130497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.863 [2024-06-10 14:07:13.130514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.863 [2024-06-10 14:07:13.130524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.863 [2024-06-10 14:07:13.130533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.130551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 00:38:58.864 [2024-06-10 14:07:13.140430] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.140521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.140539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.140549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.140558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.140581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 
00:38:58.864 [2024-06-10 14:07:13.150416] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.150500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.150519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.150529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.150537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.150555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 00:38:58.864 [2024-06-10 14:07:13.160371] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.160455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.160473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.160483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.160492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.160510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 00:38:58.864 [2024-06-10 14:07:13.170506] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.170606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.170624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.170634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.170643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.170660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 
00:38:58.864 [2024-06-10 14:07:13.180479] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.180567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.180590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.180601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.180609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.180627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 00:38:58.864 [2024-06-10 14:07:13.190514] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.190614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.190631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.190641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.190650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.190668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 00:38:58.864 [2024-06-10 14:07:13.200539] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.200663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.200680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.200695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.200704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.200722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 
00:38:58.864 [2024-06-10 14:07:13.210662] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.210754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.210772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.210782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.210791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.210811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 00:38:58.864 [2024-06-10 14:07:13.220539] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.220707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.220724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.220734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.220742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.220761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 00:38:58.864 [2024-06-10 14:07:13.230656] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.230743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.230760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.230770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.230779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.230796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 
00:38:58.864 [2024-06-10 14:07:13.240681] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.240770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.240787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.240797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.240806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.240824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 00:38:58.864 [2024-06-10 14:07:13.250724] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.250810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.250828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.250838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.250847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.250865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.864 qpair failed and we were unable to recover it. 00:38:58.864 [2024-06-10 14:07:13.260746] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.864 [2024-06-10 14:07:13.260835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.864 [2024-06-10 14:07:13.260852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.864 [2024-06-10 14:07:13.260863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.864 [2024-06-10 14:07:13.260871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.864 [2024-06-10 14:07:13.260889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.865 qpair failed and we were unable to recover it. 
00:38:58.865 [2024-06-10 14:07:13.270773] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.865 [2024-06-10 14:07:13.270876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.865 [2024-06-10 14:07:13.270893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.865 [2024-06-10 14:07:13.270902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.865 [2024-06-10 14:07:13.270911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.865 [2024-06-10 14:07:13.270929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.865 qpair failed and we were unable to recover it. 00:38:58.865 [2024-06-10 14:07:13.280805] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.865 [2024-06-10 14:07:13.280890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.865 [2024-06-10 14:07:13.280907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.865 [2024-06-10 14:07:13.280917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.865 [2024-06-10 14:07:13.280926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.865 [2024-06-10 14:07:13.280944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.865 qpair failed and we were unable to recover it. 00:38:58.865 [2024-06-10 14:07:13.290774] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.865 [2024-06-10 14:07:13.290896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.865 [2024-06-10 14:07:13.290917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.865 [2024-06-10 14:07:13.290926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.865 [2024-06-10 14:07:13.290935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.865 [2024-06-10 14:07:13.290953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.865 qpair failed and we were unable to recover it. 
00:38:58.865 [2024-06-10 14:07:13.300857] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.865 [2024-06-10 14:07:13.300942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.865 [2024-06-10 14:07:13.300959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.865 [2024-06-10 14:07:13.300969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.865 [2024-06-10 14:07:13.300978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.865 [2024-06-10 14:07:13.300996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.865 qpair failed and we were unable to recover it. 00:38:58.865 [2024-06-10 14:07:13.310895] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.865 [2024-06-10 14:07:13.310982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.865 [2024-06-10 14:07:13.310999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.865 [2024-06-10 14:07:13.311008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.865 [2024-06-10 14:07:13.311017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.865 [2024-06-10 14:07:13.311035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.865 qpair failed and we were unable to recover it. 00:38:58.865 [2024-06-10 14:07:13.320907] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.865 [2024-06-10 14:07:13.320992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.865 [2024-06-10 14:07:13.321009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.865 [2024-06-10 14:07:13.321019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.865 [2024-06-10 14:07:13.321027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.865 [2024-06-10 14:07:13.321046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.865 qpair failed and we were unable to recover it. 
00:38:58.865 [2024-06-10 14:07:13.330937] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.865 [2024-06-10 14:07:13.331031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.865 [2024-06-10 14:07:13.331051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.865 [2024-06-10 14:07:13.331061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.865 [2024-06-10 14:07:13.331070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:58.865 [2024-06-10 14:07:13.331092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:58.865 qpair failed and we were unable to recover it. 00:38:59.124 [2024-06-10 14:07:13.340964] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.124 [2024-06-10 14:07:13.341062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.124 [2024-06-10 14:07:13.341083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.124 [2024-06-10 14:07:13.341093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.124 [2024-06-10 14:07:13.341102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.124 [2024-06-10 14:07:13.341121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.124 qpair failed and we were unable to recover it. 00:38:59.124 [2024-06-10 14:07:13.351004] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.124 [2024-06-10 14:07:13.351097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.124 [2024-06-10 14:07:13.351115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.124 [2024-06-10 14:07:13.351125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.124 [2024-06-10 14:07:13.351134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.124 [2024-06-10 14:07:13.351153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.124 qpair failed and we were unable to recover it. 
00:38:59.124 [2024-06-10 14:07:13.361028] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.124 [2024-06-10 14:07:13.361115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.124 [2024-06-10 14:07:13.361133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.124 [2024-06-10 14:07:13.361143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.124 [2024-06-10 14:07:13.361151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.124 [2024-06-10 14:07:13.361169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.124 qpair failed and we were unable to recover it. 00:38:59.124 [2024-06-10 14:07:13.371095] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.124 [2024-06-10 14:07:13.371185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.124 [2024-06-10 14:07:13.371203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.124 [2024-06-10 14:07:13.371213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.124 [2024-06-10 14:07:13.371221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.124 [2024-06-10 14:07:13.371239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.124 qpair failed and we were unable to recover it. 00:38:59.124 [2024-06-10 14:07:13.381071] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.124 [2024-06-10 14:07:13.381156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.124 [2024-06-10 14:07:13.381177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.124 [2024-06-10 14:07:13.381187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.124 [2024-06-10 14:07:13.381195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.124 [2024-06-10 14:07:13.381214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.124 qpair failed and we were unable to recover it. 
00:38:59.124 [2024-06-10 14:07:13.391022] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.124 [2024-06-10 14:07:13.391125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.124 [2024-06-10 14:07:13.391142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.124 [2024-06-10 14:07:13.391151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.124 [2024-06-10 14:07:13.391160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.124 [2024-06-10 14:07:13.391177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.124 qpair failed and we were unable to recover it. 00:38:59.124 [2024-06-10 14:07:13.401085] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.124 [2024-06-10 14:07:13.401170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.124 [2024-06-10 14:07:13.401187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.124 [2024-06-10 14:07:13.401197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.124 [2024-06-10 14:07:13.401205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.124 [2024-06-10 14:07:13.401223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.124 qpair failed and we were unable to recover it. 00:38:59.124 [2024-06-10 14:07:13.411149] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.124 [2024-06-10 14:07:13.411262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.124 [2024-06-10 14:07:13.411280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.124 [2024-06-10 14:07:13.411289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.124 [2024-06-10 14:07:13.411298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.124 [2024-06-10 14:07:13.411317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.124 qpair failed and we were unable to recover it. 
00:38:59.124 [2024-06-10 14:07:13.421181] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.124 [2024-06-10 14:07:13.421269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.124 [2024-06-10 14:07:13.421286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.124 [2024-06-10 14:07:13.421296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.124 [2024-06-10 14:07:13.421307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.124 [2024-06-10 14:07:13.421325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.124 qpair failed and we were unable to recover it. 00:38:59.124 [2024-06-10 14:07:13.431248] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.124 [2024-06-10 14:07:13.431349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.124 [2024-06-10 14:07:13.431366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.124 [2024-06-10 14:07:13.431375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.124 [2024-06-10 14:07:13.431384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.124 [2024-06-10 14:07:13.431402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.124 qpair failed and we were unable to recover it. 00:38:59.124 [2024-06-10 14:07:13.441294] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.124 [2024-06-10 14:07:13.441393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.124 [2024-06-10 14:07:13.441411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.124 [2024-06-10 14:07:13.441420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.124 [2024-06-10 14:07:13.441429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.124 [2024-06-10 14:07:13.441447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.124 qpair failed and we were unable to recover it. 
00:38:59.124 [2024-06-10 14:07:13.451267] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.124 [2024-06-10 14:07:13.451355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.124 [2024-06-10 14:07:13.451372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.124 [2024-06-10 14:07:13.451382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.124 [2024-06-10 14:07:13.451390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.124 [2024-06-10 14:07:13.451408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 00:38:59.125 [2024-06-10 14:07:13.461317] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.461419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.461437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.461446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.461454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.461473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 00:38:59.125 [2024-06-10 14:07:13.471390] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.471492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.471509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.471519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.471527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.471545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 
00:38:59.125 [2024-06-10 14:07:13.481369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.481455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.481473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.481484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.481492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.481510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 00:38:59.125 [2024-06-10 14:07:13.491351] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.491436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.491454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.491464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.491473] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.491491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 00:38:59.125 [2024-06-10 14:07:13.501407] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.501500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.501517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.501527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.501536] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.501554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 
00:38:59.125 [2024-06-10 14:07:13.511473] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.511561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.511586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.511597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.511609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.511627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 00:38:59.125 [2024-06-10 14:07:13.521476] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.521561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.521584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.521594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.521603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.521621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 00:38:59.125 [2024-06-10 14:07:13.531456] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.531551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.531568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.531583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.531591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.531610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 
00:38:59.125 [2024-06-10 14:07:13.541545] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.541638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.541655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.541665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.541673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.541691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 00:38:59.125 [2024-06-10 14:07:13.551641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.551736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.551753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.551763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.551772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.551790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 00:38:59.125 [2024-06-10 14:07:13.561540] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.561634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.561652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.561662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.561671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.561689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 
00:38:59.125 [2024-06-10 14:07:13.571641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.571735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.571753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.571762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.571771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.571790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 00:38:59.125 [2024-06-10 14:07:13.581669] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.581759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.125 [2024-06-10 14:07:13.581776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.125 [2024-06-10 14:07:13.581786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.125 [2024-06-10 14:07:13.581795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.125 [2024-06-10 14:07:13.581813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.125 qpair failed and we were unable to recover it. 00:38:59.125 [2024-06-10 14:07:13.591630] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.125 [2024-06-10 14:07:13.591725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.126 [2024-06-10 14:07:13.591746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.126 [2024-06-10 14:07:13.591756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.126 [2024-06-10 14:07:13.591765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.126 [2024-06-10 14:07:13.591784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.126 qpair failed and we were unable to recover it. 
00:38:59.384 [2024-06-10 14:07:13.601739] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.384 [2024-06-10 14:07:13.601834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.384 [2024-06-10 14:07:13.601855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.384 [2024-06-10 14:07:13.601869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.384 [2024-06-10 14:07:13.601878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.384 [2024-06-10 14:07:13.601899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.384 qpair failed and we were unable to recover it. 00:38:59.384 [2024-06-10 14:07:13.611755] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.384 [2024-06-10 14:07:13.611869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.384 [2024-06-10 14:07:13.611888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.384 [2024-06-10 14:07:13.611898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.384 [2024-06-10 14:07:13.611906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.384 [2024-06-10 14:07:13.611925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.384 qpair failed and we were unable to recover it. 00:38:59.384 [2024-06-10 14:07:13.621797] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.384 [2024-06-10 14:07:13.621886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.384 [2024-06-10 14:07:13.621904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.384 [2024-06-10 14:07:13.621914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.384 [2024-06-10 14:07:13.621923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.384 [2024-06-10 14:07:13.621941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.384 qpair failed and we were unable to recover it. 
00:38:59.384 [2024-06-10 14:07:13.631836] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.384 [2024-06-10 14:07:13.631932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.384 [2024-06-10 14:07:13.631950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.384 [2024-06-10 14:07:13.631960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.384 [2024-06-10 14:07:13.631968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.384 [2024-06-10 14:07:13.631987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.384 qpair failed and we were unable to recover it. 00:38:59.384 [2024-06-10 14:07:13.641864] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.384 [2024-06-10 14:07:13.641957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.384 [2024-06-10 14:07:13.641974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.384 [2024-06-10 14:07:13.641984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.384 [2024-06-10 14:07:13.641992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.384 [2024-06-10 14:07:13.642010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.384 qpair failed and we were unable to recover it. 00:38:59.384 [2024-06-10 14:07:13.651875] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.384 [2024-06-10 14:07:13.651964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.384 [2024-06-10 14:07:13.651981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.384 [2024-06-10 14:07:13.651991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.384 [2024-06-10 14:07:13.652000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.384 [2024-06-10 14:07:13.652018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.384 qpair failed and we were unable to recover it. 
00:38:59.384 [2024-06-10 14:07:13.661905] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.384 [2024-06-10 14:07:13.661996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.384 [2024-06-10 14:07:13.662014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.384 [2024-06-10 14:07:13.662024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.384 [2024-06-10 14:07:13.662033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.384 [2024-06-10 14:07:13.662050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.384 qpair failed and we were unable to recover it. 00:38:59.384 [2024-06-10 14:07:13.671931] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.672013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.672030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.672040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.672049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.672066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 00:38:59.385 [2024-06-10 14:07:13.681954] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.682043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.682060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.682070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.682078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.682096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 
00:38:59.385 [2024-06-10 14:07:13.692012] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.692112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.692132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.692142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.692151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.692169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 00:38:59.385 [2024-06-10 14:07:13.702024] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.702115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.702132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.702142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.702150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.702168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 00:38:59.385 [2024-06-10 14:07:13.712018] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.712188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.712205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.712215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.712223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.712242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 
00:38:59.385 [2024-06-10 14:07:13.722175] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.722268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.722285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.722295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.722303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.722321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 00:38:59.385 [2024-06-10 14:07:13.732040] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.732128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.732146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.732156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.732164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.732186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 00:38:59.385 [2024-06-10 14:07:13.742152] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.742254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.742271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.742281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.742289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.742308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 
00:38:59.385 [2024-06-10 14:07:13.752184] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.752270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.752288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.752298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.752306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.752324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 00:38:59.385 [2024-06-10 14:07:13.762249] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.762342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.762360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.762369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.762378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.762396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 00:38:59.385 [2024-06-10 14:07:13.772171] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.772262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.772279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.772289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.772298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.772316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 
00:38:59.385 [2024-06-10 14:07:13.782239] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.782331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.782352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.782361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.782369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.782387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 00:38:59.385 [2024-06-10 14:07:13.792280] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.385 [2024-06-10 14:07:13.792411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.385 [2024-06-10 14:07:13.792428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.385 [2024-06-10 14:07:13.792438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.385 [2024-06-10 14:07:13.792446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.385 [2024-06-10 14:07:13.792464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.385 qpair failed and we were unable to recover it. 00:38:59.386 [2024-06-10 14:07:13.802323] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.386 [2024-06-10 14:07:13.802411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.386 [2024-06-10 14:07:13.802428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.386 [2024-06-10 14:07:13.802438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.386 [2024-06-10 14:07:13.802446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.386 [2024-06-10 14:07:13.802464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.386 qpair failed and we were unable to recover it. 
00:38:59.386 [2024-06-10 14:07:13.812346] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.386 [2024-06-10 14:07:13.812440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.386 [2024-06-10 14:07:13.812458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.386 [2024-06-10 14:07:13.812468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.386 [2024-06-10 14:07:13.812476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.386 [2024-06-10 14:07:13.812494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.386 qpair failed and we were unable to recover it. 00:38:59.386 [2024-06-10 14:07:13.822341] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.386 [2024-06-10 14:07:13.822458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.386 [2024-06-10 14:07:13.822475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.386 [2024-06-10 14:07:13.822486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.386 [2024-06-10 14:07:13.822497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.386 [2024-06-10 14:07:13.822515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.386 qpair failed and we were unable to recover it. 00:38:59.386 [2024-06-10 14:07:13.832432] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.386 [2024-06-10 14:07:13.832535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.386 [2024-06-10 14:07:13.832552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.386 [2024-06-10 14:07:13.832562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.386 [2024-06-10 14:07:13.832570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.386 [2024-06-10 14:07:13.832593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.386 qpair failed and we were unable to recover it. 
00:38:59.386 [2024-06-10 14:07:13.842408] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.386 [2024-06-10 14:07:13.842496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.386 [2024-06-10 14:07:13.842512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.386 [2024-06-10 14:07:13.842522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.386 [2024-06-10 14:07:13.842531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.386 [2024-06-10 14:07:13.842549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.386 qpair failed and we were unable to recover it. 00:38:59.386 [2024-06-10 14:07:13.852488] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.386 [2024-06-10 14:07:13.852601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.386 [2024-06-10 14:07:13.852622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.386 [2024-06-10 14:07:13.852633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.386 [2024-06-10 14:07:13.852641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.386 [2024-06-10 14:07:13.852661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.386 qpair failed and we were unable to recover it. 00:38:59.644 [2024-06-10 14:07:13.862493] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.644 [2024-06-10 14:07:13.862599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.644 [2024-06-10 14:07:13.862621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.644 [2024-06-10 14:07:13.862632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.644 [2024-06-10 14:07:13.862641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.644 [2024-06-10 14:07:13.862660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.644 qpair failed and we were unable to recover it. 
00:38:59.645 [2024-06-10 14:07:13.872521] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.872615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.872634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.872644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.872653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.872671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 00:38:59.645 [2024-06-10 14:07:13.882536] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.882629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.882647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.882657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.882666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.882684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 00:38:59.645 [2024-06-10 14:07:13.892569] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.892666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.892684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.892694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.892703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.892721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 
00:38:59.645 [2024-06-10 14:07:13.902647] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.902742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.902760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.902770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.902778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.902797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 00:38:59.645 [2024-06-10 14:07:13.912611] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.912704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.912721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.912732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.912744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.912761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 00:38:59.645 [2024-06-10 14:07:13.922672] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.922762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.922780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.922790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.922798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.922817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 
00:38:59.645 [2024-06-10 14:07:13.932681] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.932772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.932789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.932799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.932808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.932826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 00:38:59.645 [2024-06-10 14:07:13.942675] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.942767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.942785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.942795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.942803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.942821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 00:38:59.645 [2024-06-10 14:07:13.952756] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.952849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.952866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.952876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.952885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.952903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 
00:38:59.645 [2024-06-10 14:07:13.962759] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.962849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.962866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.962876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.962884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.962903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 00:38:59.645 [2024-06-10 14:07:13.972813] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.972902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.972919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.972929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.972937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.972955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 00:38:59.645 [2024-06-10 14:07:13.982821] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.982911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.982929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.982939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.982948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.982965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 
00:38:59.645 [2024-06-10 14:07:13.992872] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.645 [2024-06-10 14:07:13.992953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.645 [2024-06-10 14:07:13.992971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.645 [2024-06-10 14:07:13.992981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.645 [2024-06-10 14:07:13.992990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.645 [2024-06-10 14:07:13.993008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.645 qpair failed and we were unable to recover it. 00:38:59.646 [2024-06-10 14:07:14.002906] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.646 [2024-06-10 14:07:14.002988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.646 [2024-06-10 14:07:14.003005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.646 [2024-06-10 14:07:14.003018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.646 [2024-06-10 14:07:14.003027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.646 [2024-06-10 14:07:14.003045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.646 qpair failed and we were unable to recover it. 00:38:59.646 [2024-06-10 14:07:14.012913] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.646 [2024-06-10 14:07:14.013004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.646 [2024-06-10 14:07:14.013021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.646 [2024-06-10 14:07:14.013031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.646 [2024-06-10 14:07:14.013039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.646 [2024-06-10 14:07:14.013057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.646 qpair failed and we were unable to recover it. 
00:38:59.646 [2024-06-10 14:07:14.022997] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.646 [2024-06-10 14:07:14.023085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.646 [2024-06-10 14:07:14.023103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.646 [2024-06-10 14:07:14.023112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.646 [2024-06-10 14:07:14.023121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.646 [2024-06-10 14:07:14.023139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.646 qpair failed and we were unable to recover it. 00:38:59.646 [2024-06-10 14:07:14.032990] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.646 [2024-06-10 14:07:14.033075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.646 [2024-06-10 14:07:14.033092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.646 [2024-06-10 14:07:14.033101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.646 [2024-06-10 14:07:14.033110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.646 [2024-06-10 14:07:14.033128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.646 qpair failed and we were unable to recover it. 00:38:59.646 [2024-06-10 14:07:14.043008] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.646 [2024-06-10 14:07:14.043095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.646 [2024-06-10 14:07:14.043112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.646 [2024-06-10 14:07:14.043122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.646 [2024-06-10 14:07:14.043130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.646 [2024-06-10 14:07:14.043148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.646 qpair failed and we were unable to recover it. 
00:38:59.646 [2024-06-10 14:07:14.053019] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.646 [2024-06-10 14:07:14.053103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.646 [2024-06-10 14:07:14.053121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.646 [2024-06-10 14:07:14.053131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.646 [2024-06-10 14:07:14.053139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.646 [2024-06-10 14:07:14.053157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.646 qpair failed and we were unable to recover it. 00:38:59.646 [2024-06-10 14:07:14.063091] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.646 [2024-06-10 14:07:14.063187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.646 [2024-06-10 14:07:14.063204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.646 [2024-06-10 14:07:14.063214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.646 [2024-06-10 14:07:14.063222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.646 [2024-06-10 14:07:14.063240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.646 qpair failed and we were unable to recover it. 00:38:59.646 [2024-06-10 14:07:14.073099] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.646 [2024-06-10 14:07:14.073203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.646 [2024-06-10 14:07:14.073220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.646 [2024-06-10 14:07:14.073230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.646 [2024-06-10 14:07:14.073239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7864000b90 00:38:59.646 [2024-06-10 14:07:14.073257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:59.646 qpair failed and we were unable to recover it. 
00:38:59.646 [2024-06-10 14:07:14.083135] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.646 [2024-06-10 14:07:14.083310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.646 [2024-06-10 14:07:14.083344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.646 [2024-06-10 14:07:14.083362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.646 [2024-06-10 14:07:14.083378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7860000b90 00:38:59.646 [2024-06-10 14:07:14.083409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:59.646 qpair failed and we were unable to recover it. 00:38:59.646 [2024-06-10 14:07:14.093285] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.646 [2024-06-10 14:07:14.093475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.646 [2024-06-10 14:07:14.093561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.646 [2024-06-10 14:07:14.093612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.646 [2024-06-10 14:07:14.093642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2088fc0 00:38:59.646 [2024-06-10 14:07:14.093703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.646 qpair failed and we were unable to recover it. 00:38:59.646 [2024-06-10 14:07:14.103214] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.646 [2024-06-10 14:07:14.103413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.646 [2024-06-10 14:07:14.103452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.646 [2024-06-10 14:07:14.103474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.646 [2024-06-10 14:07:14.103495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2088fc0 00:38:59.646 [2024-06-10 14:07:14.103531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.646 qpair failed and we were unable to recover it. 
00:38:59.646 [2024-06-10 14:07:14.113230] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.646 [2024-06-10 14:07:14.113357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.646 [2024-06-10 14:07:14.113385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.646 [2024-06-10 14:07:14.113401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.646 [2024-06-10 14:07:14.113415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7860000b90 00:38:59.646 [2024-06-10 14:07:14.113445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:59.646 qpair failed and we were unable to recover it. 00:38:59.905 [2024-06-10 14:07:14.123364] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.905 [2024-06-10 14:07:14.123590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.905 [2024-06-10 14:07:14.123657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.905 [2024-06-10 14:07:14.123694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.905 [2024-06-10 14:07:14.123724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7858000b90 00:38:59.905 [2024-06-10 14:07:14.123786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:59.905 qpair failed and we were unable to recover it. 00:38:59.905 [2024-06-10 14:07:14.133331] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.905 [2024-06-10 14:07:14.133485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.905 [2024-06-10 14:07:14.133521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.905 [2024-06-10 14:07:14.133543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.905 [2024-06-10 14:07:14.133563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7858000b90 00:38:59.905 [2024-06-10 14:07:14.133617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:59.905 qpair failed and we were unable to recover it. 
00:38:59.905 [2024-06-10 14:07:14.133950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2096b50 is same with the state(5) to be set 00:38:59.905 [2024-06-10 14:07:14.134147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2096b50 (9): Bad file descriptor 00:38:59.905 Initializing NVMe Controllers 00:38:59.905 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:59.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:59.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:59.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:59.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:59.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:59.905 Initialization complete. Launching workers. 00:38:59.905 Starting thread on core 1 00:38:59.905 Starting thread on core 2 00:38:59.905 Starting thread on core 3 00:38:59.905 Starting thread on core 0 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:59.905 00:38:59.905 real 0m11.420s 00:38:59.905 user 0m20.876s 00:38:59.905 sys 0m4.909s 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:59.905 ************************************ 00:38:59.905 END TEST nvmf_target_disconnect_tc2 00:38:59.905 ************************************ 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:59.905 rmmod nvme_tcp 00:38:59.905 rmmod nvme_fabrics 00:38:59.905 rmmod nvme_keyring 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1650294 ']' 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1650294 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 1650294 ']' 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 1650294 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@954 -- # uname 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1650294 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1650294' 00:38:59.905 killing process with pid 1650294 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 1650294 00:38:59.905 14:07:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 1650294 00:39:00.163 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:00.163 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:00.163 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:00.163 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:00.163 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:00.163 14:07:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:00.163 14:07:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:00.163 14:07:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:02.698 14:07:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:02.698 00:39:02.698 real 0m22.332s 00:39:02.698 user 0m48.377s 00:39:02.698 sys 0m11.603s 00:39:02.698 14:07:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:02.698 14:07:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:02.698 ************************************ 00:39:02.698 END TEST nvmf_target_disconnect 00:39:02.698 ************************************ 00:39:02.698 14:07:16 nvmf_tcp -- nvmf/nvmf.sh@127 -- # timing_exit host 00:39:02.698 14:07:16 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:02.698 14:07:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:02.698 14:07:16 nvmf_tcp -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:39:02.698 00:39:02.698 real 26m25.180s 00:39:02.698 user 51m3.629s 00:39:02.698 sys 10m0.712s 00:39:02.698 14:07:16 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:02.698 14:07:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:02.698 ************************************ 00:39:02.698 END TEST nvmf_tcp 00:39:02.698 ************************************ 00:39:02.698 14:07:16 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:39:02.698 14:07:16 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:02.698 14:07:16 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:39:02.698 14:07:16 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:02.698 14:07:16 -- common/autotest_common.sh@10 -- # set +x 00:39:02.698 ************************************ 00:39:02.698 START TEST 
spdkcli_nvmf_tcp 00:39:02.698 ************************************ 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:02.698 * Looking for test storage... 00:39:02.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:02.698 14:07:16 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1651969 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1651969 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 1651969 ']' 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:02.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:02.699 14:07:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:02.699 [2024-06-10 14:07:17.019465] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:39:02.699 [2024-06-10 14:07:17.019532] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651969 ] 00:39:02.699 EAL: No free 2048 kB hugepages reported on node 1 00:39:02.699 [2024-06-10 14:07:17.141559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:02.958 [2024-06-10 14:07:17.228672] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.958 [2024-06-10 14:07:17.228678] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.526 14:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:03.526 14:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:39:03.526 14:07:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:03.526 14:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:03.526 14:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:03.526 14:07:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:03.526 14:07:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:03.526 14:07:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:03.526 14:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:03.526 14:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:03.526 14:07:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:03.526 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:03.526 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:03.526 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:03.526 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:03.526 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:03.526 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:03.526 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:03.526 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:03.526 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:03.526 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:03.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:03.526 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:03.526 ' 00:39:06.061 [2024-06-10 14:07:20.361162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:07.475 [2024-06-10 14:07:21.629624] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:39:10.005 [2024-06-10 14:07:24.009170] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:39:11.904 [2024-06-10 14:07:26.075796] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:39:13.278 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:39:13.278 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:39:13.278 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:39:13.278 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:39:13.278 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:39:13.278 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:39:13.278 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:39:13.278 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:39:13.278 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:13.278 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:13.278 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:39:13.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:39:13.279 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:39:13.279 14:07:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:39:13.279 14:07:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:13.279 14:07:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:13.536 14:07:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:39:13.536 14:07:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:13.536 14:07:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:13.536 14:07:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:39:13.537 14:07:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:39:13.794 14:07:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:39:13.794 14:07:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:39:13.794 14:07:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:39:13.794 14:07:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:13.794 14:07:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:14.052 14:07:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:39:14.052 14:07:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:14.052 14:07:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:14.052 14:07:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:39:14.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:39:14.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:14.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:39:14.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:39:14.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:39:14.052 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:39:14.052 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:14.052 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:39:14.052 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:39:14.052 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:39:14.052 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:39:14.052 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:39:14.052 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:39:14.052 ' 00:39:19.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:39:19.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:39:19.317 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:19.317 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:39:19.317 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:39:19.317 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:39:19.317 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:39:19.317 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:19.317 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:39:19.317 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:39:19.317 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:39:19.317 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:39:19.317 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:39:19.317 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1651969 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1651969 ']' 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1651969 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1651969 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1651969' 00:39:19.317 killing process with pid 1651969 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 1651969 00:39:19.317 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 1651969 00:39:19.575 14:07:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:39:19.575 14:07:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:39:19.575 14:07:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1651969 ']' 00:39:19.575 14:07:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1651969 00:39:19.575 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1651969 ']' 00:39:19.575 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1651969 00:39:19.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1651969) - No such process 00:39:19.575 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 1651969 is not found' 00:39:19.575 Process with pid 1651969 is not found 00:39:19.575 14:07:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:39:19.575 14:07:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:39:19.575 14:07:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:39:19.575 00:39:19.575 real 0m17.033s 00:39:19.575 user 0m36.282s 00:39:19.575 sys 0m1.147s 00:39:19.575 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:19.575 14:07:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:19.575 ************************************ 00:39:19.575 END TEST spdkcli_nvmf_tcp 00:39:19.575 ************************************ 00:39:19.575 14:07:33 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:19.575 14:07:33 -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:39:19.575 14:07:33 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:19.575 14:07:33 -- common/autotest_common.sh@10 -- # set +x 00:39:19.575 ************************************ 00:39:19.575 START TEST nvmf_identify_passthru 00:39:19.575 ************************************ 00:39:19.575 14:07:33 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:19.575 * Looking for test storage... 00:39:19.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:19.575 14:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:19.833 14:07:34 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:19.833 14:07:34 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:19.833 14:07:34 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:19.833 14:07:34 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.833 14:07:34 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.833 14:07:34 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.833 14:07:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:19.833 14:07:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:19.833 14:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:19.833 14:07:34 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:19.833 14:07:34 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:19.833 14:07:34 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:19.833 14:07:34 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.833 14:07:34 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.833 14:07:34 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.833 14:07:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:19.833 14:07:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:19.833 14:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:19.833 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:19.834 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:19.834 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:19.834 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:19.834 14:07:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:19.834 14:07:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:19.834 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:19.834 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:19.834 14:07:34 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:39:19.834 14:07:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:29.804 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:29.804 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:29.804 14:07:42 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:29.804 Found net devices under 0000:af:00.0: cvl_0_0 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:29.804 Found net devices under 0000:af:00.1: cvl_0_1 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:29.804 14:07:42 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:29.804 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:29.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:29.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:39:29.805 00:39:29.805 --- 10.0.0.2 ping statistics --- 00:39:29.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.805 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:29.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:29.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:39:29.805 00:39:29.805 --- 10.0.0.1 ping statistics --- 00:39:29.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.805 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:29.805 14:07:42 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:29.805 14:07:42 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:29.805 14:07:42 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:39:29.805 14:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:d8:00.0 00:39:29.805 14:07:42 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:39:29.805 14:07:42 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:39:29.805 14:07:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:39:29.805 14:07:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:39:29.805 14:07:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:39:29.805 EAL: No free 2048 kB hugepages reported on node 1 00:39:33.990 
14:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN036005WL1P6AGN 00:39:33.990 14:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:39:33.990 14:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:39:33.990 14:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:39:33.990 EAL: No free 2048 kB hugepages reported on node 1 00:39:38.176 14:07:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:39:38.176 14:07:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:39:38.176 14:07:52 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:38.176 14:07:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:38.176 14:07:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:39:38.176 14:07:52 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:38.176 14:07:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:38.176 14:07:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1660459 00:39:38.176 14:07:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:38.176 14:07:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1660459 00:39:38.176 14:07:52 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 1660459 ']' 00:39:38.176 14:07:52 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:38.176 14:07:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:39:38.176 14:07:52 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:38.176 14:07:52 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:38.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:38.176 14:07:52 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:38.176 14:07:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:38.435 [2024-06-10 14:07:52.691230] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:39:38.435 [2024-06-10 14:07:52.691294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:38.435 EAL: No free 2048 kB hugepages reported on node 1 00:39:38.435 [2024-06-10 14:07:52.818883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:38.435 [2024-06-10 14:07:52.904719] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:38.435 [2024-06-10 14:07:52.904765] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:39:38.435 [2024-06-10 14:07:52.904778] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:38.435 [2024-06-10 14:07:52.904790] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:38.435 [2024-06-10 14:07:52.904801] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:38.435 [2024-06-10 14:07:52.904896] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:39:38.435 [2024-06-10 14:07:52.904920] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:39:38.435 [2024-06-10 14:07:52.905030] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.435 [2024-06-10 14:07:52.905030] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:39:39.368 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:39.368 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:39:39.368 14:07:53 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:39:39.368 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.368 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:39.368 INFO: Log level set to 20 00:39:39.368 INFO: Requests: 00:39:39.368 { 00:39:39.368 "jsonrpc": "2.0", 00:39:39.368 "method": "nvmf_set_config", 00:39:39.368 "id": 1, 00:39:39.368 "params": { 00:39:39.368 "admin_cmd_passthru": { 00:39:39.368 "identify_ctrlr": true 00:39:39.368 } 00:39:39.368 } 00:39:39.368 } 00:39:39.368 00:39:39.368 INFO: response: 00:39:39.368 { 00:39:39.368 "jsonrpc": "2.0", 00:39:39.368 "id": 1, 00:39:39.368 "result": true 00:39:39.368 } 00:39:39.368 00:39:39.368 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.368 14:07:53 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:39:39.368 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.368 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:39.368 INFO: Setting log level to 20 00:39:39.368 INFO: Setting log level to 20 00:39:39.368 INFO: Log level set to 20 00:39:39.368 INFO: Log level set to 20 00:39:39.368 INFO: Requests: 00:39:39.368 { 00:39:39.368 "jsonrpc": "2.0", 00:39:39.368 "method": "framework_start_init", 00:39:39.368 "id": 1 00:39:39.368 } 00:39:39.368 00:39:39.368 INFO: Requests: 00:39:39.368 { 00:39:39.368 "jsonrpc": "2.0", 00:39:39.368 "method": "framework_start_init", 00:39:39.368 "id": 1 00:39:39.368 } 00:39:39.368 00:39:39.368 [2024-06-10 14:07:53.698089] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:39:39.368 INFO: response: 00:39:39.368 { 00:39:39.368 "jsonrpc": "2.0", 00:39:39.368 "id": 1, 00:39:39.368 "result": true 00:39:39.368 } 00:39:39.368 00:39:39.368 INFO: response: 00:39:39.368 { 00:39:39.368 "jsonrpc": "2.0", 00:39:39.368 "id": 1, 00:39:39.368 "result": true 00:39:39.368 } 00:39:39.368 00:39:39.368 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.368 14:07:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:39.368 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.368 14:07:53 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:39:39.368 INFO: Setting log level to 40 00:39:39.368 INFO: Setting log level to 40 00:39:39.368 INFO: Setting log level to 40 00:39:39.369 [2024-06-10 14:07:53.711559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:39.369 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.369 14:07:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:39:39.369 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:39.369 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:39.369 14:07:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:39:39.369 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.369 14:07:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:42.650 Nvme0n1 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:42.650 14:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:42.650 14:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:42.650 14:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:42.650 [2024-06-10 14:07:56.660368] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:42.650 14:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:42.650 [ 00:39:42.650 { 00:39:42.650 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:42.650 "subtype": "Discovery", 00:39:42.650 "listen_addresses": [], 00:39:42.650 "allow_any_host": true, 00:39:42.650 "hosts": [] 00:39:42.650 }, 00:39:42.650 { 00:39:42.650 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:39:42.650 "subtype": "NVMe", 00:39:42.650 "listen_addresses": [ 00:39:42.650 { 00:39:42.650 "trtype": "TCP", 00:39:42.650 "adrfam": "IPv4", 00:39:42.650 "traddr": "10.0.0.2", 00:39:42.650 "trsvcid": "4420" 00:39:42.650 } 00:39:42.650 ], 00:39:42.650 "allow_any_host": true, 00:39:42.650 "hosts": [], 00:39:42.650 "serial_number": 
"SPDK00000000000001", 00:39:42.650 "model_number": "SPDK bdev Controller", 00:39:42.650 "max_namespaces": 1, 00:39:42.650 "min_cntlid": 1, 00:39:42.650 "max_cntlid": 65519, 00:39:42.650 "namespaces": [ 00:39:42.650 { 00:39:42.650 "nsid": 1, 00:39:42.650 "bdev_name": "Nvme0n1", 00:39:42.650 "name": "Nvme0n1", 00:39:42.650 "nguid": "411B9961AA6F4225B14D275D1A101C0D", 00:39:42.650 "uuid": "411b9961-aa6f-4225-b14d-275d1a101c0d" 00:39:42.650 } 00:39:42.650 ] 00:39:42.650 } 00:39:42.650 ] 00:39:42.650 14:07:56 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:42.650 14:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:42.650 14:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:39:42.650 14:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:39:42.650 EAL: No free 2048 kB hugepages reported on node 1 00:39:42.650 14:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN036005WL1P6AGN 00:39:42.650 14:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:42.650 14:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:39:42.650 14:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:39:42.650 EAL: No free 2048 kB hugepages reported on node 1 00:39:42.650 14:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:39:42.650 14:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN036005WL1P6AGN '!=' PHLN036005WL1P6AGN ']' 00:39:42.650 14:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:39:42.650 14:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:42.650 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:42.650 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:42.650 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:42.650 14:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:39:42.650 14:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:39:42.650 14:07:57 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:42.650 14:07:57 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:39:42.650 14:07:57 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:42.650 14:07:57 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:39:42.650 14:07:57 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:42.650 14:07:57 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:42.650 rmmod nvme_tcp 00:39:42.650 rmmod nvme_fabrics 00:39:42.650 rmmod nvme_keyring 00:39:42.650 14:07:57 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:42.907 14:07:57 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:39:42.907 14:07:57 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:39:42.907 14:07:57 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1660459 ']' 00:39:42.907 14:07:57 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1660459 00:39:42.907 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 1660459 ']' 00:39:42.907 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 1660459 00:39:42.907 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:39:42.907 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:42.907 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1660459 00:39:42.907 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:42.907 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:42.907 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1660459' 00:39:42.907 killing process with pid 1660459 00:39:42.907 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 1660459 00:39:42.907 14:07:57 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 1660459 00:39:44.804 14:07:59 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:44.804 14:07:59 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:44.804 14:07:59 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:44.804 14:07:59 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:44.804 14:07:59 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:45.061 14:07:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:45.061 14:07:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:45.061 14:07:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:46.965 14:08:01 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:46.965 00:39:46.965 real 0m27.416s 00:39:46.965 user 0m34.430s 00:39:46.965 sys 0m8.351s 00:39:46.965 14:08:01 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:46.965 14:08:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:46.965 ************************************ 00:39:46.965 END TEST nvmf_identify_passthru 00:39:46.965 ************************************ 00:39:46.965 14:08:01 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:46.965 14:08:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:39:46.965 14:08:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:46.965 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:39:47.224 ************************************ 00:39:47.224 START TEST nvmf_dif 00:39:47.224 ************************************ 00:39:47.224 14:08:01 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:47.224 * Looking for test storage... 
00:39:47.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:47.224 14:08:01 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:47.224 14:08:01 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:47.224 14:08:01 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:47.224 14:08:01 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:47.224 14:08:01 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.224 14:08:01 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.224 14:08:01 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.224 14:08:01 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:39:47.224 14:08:01 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:47.224 14:08:01 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:39:47.224 14:08:01 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:39:47.224 14:08:01 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:39:47.224 14:08:01 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:39:47.224 14:08:01 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:47.224 14:08:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:47.224 14:08:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:47.224 14:08:01 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:39:47.224 14:08:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:57.230 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:57.230 14:08:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:57.231 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
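The device scan above maps each matching PCI function to its kernel network interface by globbing the device's net/ directory in sysfs, then keeps only interfaces that are up. The same lookup can be done by hand (sketch; the PCI address is the E810 port from the "Found 0000:af:00.0" line above, and the interface name is the one the scan reports just below):

ls /sys/bus/pci/devices/0000:af:00.0/net/    # interface name(s) behind this PCI function, e.g. cvl_0_0
cat /sys/class/net/cvl_0_0/operstate         # "up" here is what the [[ up == up ]] checks below compare
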
00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:57.231 Found net devices under 0000:af:00.0: cvl_0_0 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:57.231 Found net devices under 0000:af:00.1: cvl_0_1 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:57.231 14:08:09 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:57.231 14:08:10 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:57.231 14:08:10 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:57.231 14:08:10 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:57.231 14:08:10 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:57.231 14:08:10 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:57.231 14:08:10 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:57.231 14:08:10 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:57.231 14:08:10 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:57.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:57.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:39:57.231 00:39:57.231 --- 10.0.0.2 ping statistics --- 00:39:57.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:57.231 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:39:57.231 14:08:10 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:57.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:57.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:39:57.231 00:39:57.231 --- 10.0.0.1 ping statistics --- 00:39:57.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:57.231 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:39:57.231 14:08:10 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:57.231 14:08:10 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:39:57.231 14:08:10 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:39:57.231 14:08:10 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:59.783 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:39:59.783 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:00.041 14:08:14 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:00.041 14:08:14 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:00.041 14:08:14 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:00.041 14:08:14 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:00.041 14:08:14 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:00.041 14:08:14 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:00.041 14:08:14 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:00.041 14:08:14 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:00.041 14:08:14 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:00.041 14:08:14 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:00.041 14:08:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:00.041 14:08:14 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1667429 00:40:00.041 14:08:14 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:00.041 14:08:14 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1667429 00:40:00.041 14:08:14 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 1667429 ']' 00:40:00.041 14:08:14 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:00.041 14:08:14 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:00.041 14:08:14 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:00.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:00.041 14:08:14 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:00.041 14:08:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:00.041 [2024-06-10 14:08:14.383637] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:40:00.041 [2024-06-10 14:08:14.383698] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:00.041 EAL: No free 2048 kB hugepages reported on node 1 00:40:00.041 [2024-06-10 14:08:14.509122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:00.299 [2024-06-10 14:08:14.592884] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:00.299 [2024-06-10 14:08:14.592929] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:00.299 [2024-06-10 14:08:14.592943] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:00.299 [2024-06-10 14:08:14.592955] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:00.299 [2024-06-10 14:08:14.592965] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
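The notices above come from nvmf_tgt starting inside the cvl_0_0_ns_spdk namespace; everything after this point configures that target over its JSON-RPC socket. A stand-alone equivalent of the bring-up that dif.sh performs below would look roughly like this (sketch only: the rpc.py location and default /var/tmp/spdk.sock socket are assumptions, while the arguments are copied from the rpc_cmd calls visible in this log):

RPC=./scripts/rpc.py                         # assumed path inside an SPDK checkout

# TCP transport with DIF insert/strip (target/dif.sh@50 below)
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip

# 64 MB null bdev, 512-byte blocks + 16-byte metadata, protection type 1 (target/dif.sh@21 below)
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# Subsystem, namespace and TCP listener on 10.0.0.2:4420 (target/dif.sh@22-24 below)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
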
00:40:00.299 [2024-06-10 14:08:14.592996] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:40:00.867 14:08:15 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:00.867 14:08:15 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:40:00.868 14:08:15 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:00.868 14:08:15 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:00.868 14:08:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:00.868 14:08:15 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:00.868 14:08:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:00.868 14:08:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:00.868 14:08:15 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:00.868 14:08:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:00.868 [2024-06-10 14:08:15.329165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:00.868 14:08:15 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:00.868 14:08:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:00.868 14:08:15 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:00.868 14:08:15 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:00.868 14:08:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:01.127 ************************************ 00:40:01.127 START TEST fio_dif_1_default 00:40:01.127 ************************************ 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:01.127 bdev_null0 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:01.127 [2024-06-10 14:08:15.409528] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:01.127 { 00:40:01.127 "params": { 00:40:01.127 "name": "Nvme$subsystem", 00:40:01.127 "trtype": "$TEST_TRANSPORT", 00:40:01.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:01.127 "adrfam": "ipv4", 00:40:01.127 "trsvcid": "$NVMF_PORT", 00:40:01.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:01.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:01.127 "hdgst": ${hdgst:-false}, 00:40:01.127 "ddgst": ${ddgst:-false} 00:40:01.127 }, 00:40:01.127 "method": "bdev_nvme_attach_controller" 00:40:01.127 } 00:40:01.127 EOF 00:40:01.127 )") 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:01.127 "params": { 00:40:01.127 "name": "Nvme0", 00:40:01.127 "trtype": "tcp", 00:40:01.127 "traddr": "10.0.0.2", 00:40:01.127 "adrfam": "ipv4", 00:40:01.127 "trsvcid": "4420", 00:40:01.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:01.127 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:01.127 "hdgst": false, 00:40:01.127 "ddgst": false 00:40:01.127 }, 00:40:01.127 "method": "bdev_nvme_attach_controller" 00:40:01.127 }' 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:01.127 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:40:01.128 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:40:01.128 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:40:01.128 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:40:01.128 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:01.128 14:08:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:01.387 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:01.387 fio-3.35 00:40:01.387 Starting 1 thread 00:40:01.646 EAL: No free 2048 kB hugepages reported on node 1 00:40:13.857 00:40:13.857 filename0: (groupid=0, jobs=1): err= 0: pid=1667867: Mon Jun 10 14:08:26 2024 00:40:13.857 read: IOPS=186, BW=747KiB/s (765kB/s)(7472KiB/10004msec) 00:40:13.857 slat (nsec): min=8109, max=94104, avg=8428.63, stdev=2098.97 00:40:13.857 clat (usec): min=891, max=43035, avg=21398.18, stdev=20432.19 00:40:13.857 lat (usec): min=899, max=43044, avg=21406.61, stdev=20432.10 00:40:13.857 clat percentiles (usec): 00:40:13.857 | 1.00th=[ 906], 5.00th=[ 914], 10.00th=[ 922], 20.00th=[ 930], 00:40:13.857 | 30.00th=[ 930], 40.00th=[ 938], 50.00th=[41157], 60.00th=[41157], 00:40:13.857 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:13.857 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:40:13.857 | 99.99th=[43254] 00:40:13.857 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=747.79, stdev=30.56, samples=19 00:40:13.857 iops : min= 176, max= 192, 
avg=186.95, stdev= 7.64, samples=19 00:40:13.857 lat (usec) : 1000=49.89% 00:40:13.857 lat (msec) : 50=50.11% 00:40:13.857 cpu : usr=86.21%, sys=13.47%, ctx=13, majf=0, minf=199 00:40:13.857 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:13.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.857 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:13.857 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:13.857 00:40:13.857 Run status group 0 (all jobs): 00:40:13.857 READ: bw=747KiB/s (765kB/s), 747KiB/s-747KiB/s (765kB/s-765kB/s), io=7472KiB (7651kB), run=10004-10004msec 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.857 00:40:13.857 real 0m11.389s 00:40:13.857 user 0m20.748s 00:40:13.857 sys 0m1.721s 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:13.857 ************************************ 00:40:13.857 END TEST fio_dif_1_default 00:40:13.857 ************************************ 00:40:13.857 14:08:26 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:13.857 14:08:26 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:13.857 14:08:26 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:13.857 14:08:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:13.857 ************************************ 00:40:13.857 START TEST fio_dif_1_multi_subsystems 00:40:13.857 ************************************ 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:13.857 14:08:26 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:13.857 bdev_null0 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:13.857 [2024-06-10 14:08:26.888788] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:13.857 bdev_null1 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:13.857 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:13.858 { 00:40:13.858 "params": { 00:40:13.858 "name": "Nvme$subsystem", 00:40:13.858 "trtype": "$TEST_TRANSPORT", 00:40:13.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:13.858 "adrfam": "ipv4", 00:40:13.858 "trsvcid": "$NVMF_PORT", 00:40:13.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:13.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:13.858 "hdgst": ${hdgst:-false}, 00:40:13.858 "ddgst": ${ddgst:-false} 00:40:13.858 }, 00:40:13.858 "method": "bdev_nvme_attach_controller" 00:40:13.858 } 00:40:13.858 EOF 00:40:13.858 )") 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:13.858 
14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # shift 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:13.858 { 00:40:13.858 "params": { 00:40:13.858 "name": "Nvme$subsystem", 00:40:13.858 "trtype": "$TEST_TRANSPORT", 00:40:13.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:13.858 "adrfam": "ipv4", 00:40:13.858 "trsvcid": "$NVMF_PORT", 00:40:13.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:13.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:13.858 "hdgst": ${hdgst:-false}, 00:40:13.858 "ddgst": ${ddgst:-false} 00:40:13.858 }, 00:40:13.858 "method": "bdev_nvme_attach_controller" 00:40:13.858 } 00:40:13.858 EOF 00:40:13.858 )") 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
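Note that fio never goes through a kernel NVMe/TCP initiator here; the spdk_bdev ioengine receives a JSON bdev configuration on /dev/fd/62 and attaches to both target subsystems in userspace. The two bdev_nvme_attach_controller entries printed just below end up wrapped in a config of roughly this shape (the entries are copied from the log; the outer subsystems/bdev wrapper produced by gen_nvmf_target_json is an assumption in this sketch):

cat > /tmp/nvmf_bdev.json <<'EOF'            # illustrative path; the harness streams this via /dev/fd/62
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } }
      ]
    }
  ]
}
EOF
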
00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:13.858 "params": { 00:40:13.858 "name": "Nvme0", 00:40:13.858 "trtype": "tcp", 00:40:13.858 "traddr": "10.0.0.2", 00:40:13.858 "adrfam": "ipv4", 00:40:13.858 "trsvcid": "4420", 00:40:13.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:13.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:13.858 "hdgst": false, 00:40:13.858 "ddgst": false 00:40:13.858 }, 00:40:13.858 "method": "bdev_nvme_attach_controller" 00:40:13.858 },{ 00:40:13.858 "params": { 00:40:13.858 "name": "Nvme1", 00:40:13.858 "trtype": "tcp", 00:40:13.858 "traddr": "10.0.0.2", 00:40:13.858 "adrfam": "ipv4", 00:40:13.858 "trsvcid": "4420", 00:40:13.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:13.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:13.858 "hdgst": false, 00:40:13.858 "ddgst": false 00:40:13.858 }, 00:40:13.858 "method": "bdev_nvme_attach_controller" 00:40:13.858 }' 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:40:13.858 14:08:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:40:13.858 14:08:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:40:13.858 14:08:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:40:13.858 14:08:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:13.858 14:08:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:13.858 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:13.858 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:13.858 fio-3.35 00:40:13.858 Starting 2 threads 00:40:13.858 EAL: No free 2048 kB hugepages reported on node 1 00:40:23.825 00:40:23.825 filename0: (groupid=0, jobs=1): err= 0: pid=1669916: Mon Jun 10 14:08:38 2024 00:40:23.825 read: IOPS=185, BW=742KiB/s (759kB/s)(7424KiB/10011msec) 00:40:23.825 slat (nsec): min=8261, max=32830, avg=9240.87, stdev=1984.03 00:40:23.825 clat (usec): min=825, max=43080, avg=21547.78, stdev=20507.80 00:40:23.825 lat (usec): min=834, max=43089, avg=21557.02, stdev=20507.16 00:40:23.825 clat percentiles (usec): 00:40:23.825 | 1.00th=[ 922], 5.00th=[ 930], 10.00th=[ 938], 20.00th=[ 947], 00:40:23.825 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[41157], 60.00th=[41681], 00:40:23.825 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:23.825 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:40:23.825 | 99.99th=[43254] 
00:40:23.825 bw ( KiB/s): min= 672, max= 768, per=49.90%, avg=740.80, stdev=34.86, samples=20 00:40:23.825 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:40:23.825 lat (usec) : 1000=43.97% 00:40:23.825 lat (msec) : 2=5.82%, 50=50.22% 00:40:23.825 cpu : usr=92.96%, sys=6.73%, ctx=13, majf=0, minf=145 00:40:23.825 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:23.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.825 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.825 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:23.825 filename1: (groupid=0, jobs=1): err= 0: pid=1669917: Mon Jun 10 14:08:38 2024 00:40:23.825 read: IOPS=185, BW=742KiB/s (760kB/s)(7440KiB/10023msec) 00:40:23.825 slat (nsec): min=8274, max=25866, avg=9252.55, stdev=1909.79 00:40:23.825 clat (usec): min=938, max=43053, avg=21526.99, stdev=20516.24 00:40:23.825 lat (usec): min=946, max=43079, avg=21536.24, stdev=20515.61 00:40:23.825 clat percentiles (usec): 00:40:23.825 | 1.00th=[ 947], 5.00th=[ 947], 10.00th=[ 955], 20.00th=[ 963], 00:40:23.825 | 30.00th=[ 971], 40.00th=[ 979], 50.00th=[41157], 60.00th=[41681], 00:40:23.825 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:23.825 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:40:23.825 | 99.99th=[43254] 00:40:23.825 bw ( KiB/s): min= 704, max= 768, per=50.03%, avg=742.40, stdev=32.17, samples=20 00:40:23.825 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:40:23.825 lat (usec) : 1000=44.62% 00:40:23.825 lat (msec) : 2=5.27%, 50=50.11% 00:40:23.825 cpu : usr=92.67%, sys=7.02%, ctx=12, majf=0, minf=35 00:40:23.825 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:23.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:23.825 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:23.825 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:23.825 00:40:23.825 Run status group 0 (all jobs): 00:40:23.825 READ: bw=1483KiB/s (1519kB/s), 742KiB/s-742KiB/s (759kB/s-760kB/s), io=14.5MiB (15.2MB), run=10011-10023msec 00:40:23.825 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:23.825 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:23.825 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:23.825 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:23.825 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:23.826 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:23.826 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:23.826 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:23.826 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:23.826 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:23.826 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:40:23.826 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:24.084 00:40:24.084 real 0m11.464s 00:40:24.084 user 0m30.611s 00:40:24.084 sys 0m1.790s 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:24.084 14:08:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:24.084 ************************************ 00:40:24.084 END TEST fio_dif_1_multi_subsystems 00:40:24.084 ************************************ 00:40:24.084 14:08:38 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:40:24.084 14:08:38 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:24.084 14:08:38 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:24.084 14:08:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:24.084 ************************************ 00:40:24.084 START TEST fio_dif_rand_params 00:40:24.084 ************************************ 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:24.084 
14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.084 bdev_null0 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:24.084 [2024-06-10 14:08:38.432430] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:24.084 { 00:40:24.084 "params": { 00:40:24.084 "name": "Nvme$subsystem", 00:40:24.084 "trtype": "$TEST_TRANSPORT", 00:40:24.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:24.084 "adrfam": "ipv4", 00:40:24.084 "trsvcid": "$NVMF_PORT", 00:40:24.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:24.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:24.084 "hdgst": ${hdgst:-false}, 00:40:24.084 "ddgst": ${ddgst:-false} 00:40:24.084 }, 00:40:24.084 "method": "bdev_nvme_attach_controller" 00:40:24.084 } 00:40:24.084 EOF 00:40:24.084 )") 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
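With that JSON config in hand, the harness preloads the SPDK fio bdev plugin and runs fio directly against the attached namespaces, as the LD_PRELOAD invocation below shows. A minimal hand-written equivalent of this step (a sketch, not the job file gen_fio_conf generates: the Nvme0n1 filename follows SPDK's usual <controller>n<nsid> bdev naming and is an assumption, paths are assumed, and ioengine/bs/iodepth/numjobs mirror the fio banner printed below):

cat > /tmp/dif_job.fio <<'EOF'
[global]
thread=1                                     # the spdk_bdev ioengine requires fio thread mode
ioengine=spdk_bdev
spdk_json_conf=/tmp/nvmf_bdev.json           # the bdev config sketched earlier
rw=randread
bs=128k
iodepth=3

[filename0]
filename=Nvme0n1                             # assumed bdev name created by bdev_nvme_attach_controller
numjobs=3
EOF

LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio /tmp/dif_job.fio
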
00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:24.084 "params": { 00:40:24.084 "name": "Nvme0", 00:40:24.084 "trtype": "tcp", 00:40:24.084 "traddr": "10.0.0.2", 00:40:24.084 "adrfam": "ipv4", 00:40:24.084 "trsvcid": "4420", 00:40:24.084 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:24.084 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:24.084 "hdgst": false, 00:40:24.084 "ddgst": false 00:40:24.084 }, 00:40:24.084 "method": "bdev_nvme_attach_controller" 00:40:24.084 }' 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:24.084 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:40:24.085 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:40:24.085 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:40:24.085 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:40:24.085 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:24.085 14:08:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:24.650 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:24.650 ... 
00:40:24.650 fio-3.35 00:40:24.650 Starting 3 threads 00:40:24.650 EAL: No free 2048 kB hugepages reported on node 1 00:40:31.209 00:40:31.209 filename0: (groupid=0, jobs=1): err= 0: pid=1671880: Mon Jun 10 14:08:44 2024 00:40:31.209 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(140MiB/5048msec) 00:40:31.209 slat (nsec): min=5986, max=70917, avg=9464.13, stdev=3271.54 00:40:31.209 clat (usec): min=5372, max=95322, avg=13453.68, stdev=12629.91 00:40:31.209 lat (usec): min=5378, max=95334, avg=13463.15, stdev=12630.02 00:40:31.209 clat percentiles (usec): 00:40:31.209 | 1.00th=[ 5866], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 8160], 00:40:31.209 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:40:31.209 | 70.00th=[11076], 80.00th=[12256], 90.00th=[14353], 95.00th=[51119], 00:40:31.209 | 99.00th=[53740], 99.50th=[55313], 99.90th=[91751], 99.95th=[94897], 00:40:31.209 | 99.99th=[94897] 00:40:31.209 bw ( KiB/s): min=16896, max=34816, per=37.03%, avg=28646.40, stdev=5409.69, samples=10 00:40:31.209 iops : min= 132, max= 272, avg=223.80, stdev=42.26, samples=10 00:40:31.209 lat (msec) : 10=61.64%, 20=29.26%, 50=1.52%, 100=7.58% 00:40:31.209 cpu : usr=92.81%, sys=6.80%, ctx=37, majf=0, minf=175 00:40:31.209 IO depths : 1=2.2%, 2=97.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.209 issued rwts: total=1121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.209 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:31.209 filename0: (groupid=0, jobs=1): err= 0: pid=1671881: Mon Jun 10 14:08:44 2024 00:40:31.209 read: IOPS=189, BW=23.6MiB/s (24.8MB/s)(119MiB/5045msec) 00:40:31.209 slat (nsec): min=5987, max=29621, avg=9139.52, stdev=2640.16 00:40:31.209 clat (usec): min=5167, max=94915, avg=15846.90, stdev=15482.59 00:40:31.209 lat (usec): min=5173, max=94923, avg=15856.04, stdev=15482.82 00:40:31.209 clat percentiles (usec): 00:40:31.209 | 1.00th=[ 5604], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 7898], 00:40:31.209 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10421], 00:40:31.209 | 70.00th=[11994], 80.00th=[13173], 90.00th=[51119], 95.00th=[53216], 00:40:31.209 | 99.00th=[55313], 99.50th=[56361], 99.90th=[94897], 99.95th=[94897], 00:40:31.209 | 99.99th=[94897] 00:40:31.209 bw ( KiB/s): min=16128, max=31488, per=31.46%, avg=24339.40, stdev=6625.79, samples=10 00:40:31.209 iops : min= 126, max= 246, avg=190.10, stdev=51.71, samples=10 00:40:31.209 lat (msec) : 10=54.72%, 20=30.61%, 50=2.10%, 100=12.58% 00:40:31.209 cpu : usr=93.08%, sys=6.54%, ctx=12, majf=0, minf=97 00:40:31.209 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.209 issued rwts: total=954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.209 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:31.209 filename0: (groupid=0, jobs=1): err= 0: pid=1671882: Mon Jun 10 14:08:44 2024 00:40:31.209 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(122MiB/5003msec) 00:40:31.209 slat (nsec): min=5988, max=25085, avg=9422.04, stdev=2434.89 00:40:31.209 clat (usec): min=4685, max=93809, avg=15363.38, stdev=14841.15 00:40:31.209 lat (usec): min=4697, max=93816, avg=15372.80, stdev=14841.33 00:40:31.209 clat percentiles (usec): 
00:40:31.209 | 1.00th=[ 5604], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 8586], 00:40:31.209 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10552], 00:40:31.209 | 70.00th=[11731], 80.00th=[13173], 90.00th=[50594], 95.00th=[52691], 00:40:31.209 | 99.00th=[56361], 99.50th=[60556], 99.90th=[93848], 99.95th=[93848], 00:40:31.209 | 99.99th=[93848] 00:40:31.209 bw ( KiB/s): min=19200, max=33024, per=32.20%, avg=24913.70, stdev=5380.49, samples=10 00:40:31.209 iops : min= 150, max= 258, avg=194.60, stdev=42.04, samples=10 00:40:31.209 lat (msec) : 10=52.25%, 20=35.14%, 50=1.33%, 100=11.27% 00:40:31.209 cpu : usr=92.34%, sys=7.32%, ctx=15, majf=0, minf=86 00:40:31.209 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.209 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.210 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:31.210 00:40:31.210 Run status group 0 (all jobs): 00:40:31.210 READ: bw=75.5MiB/s (79.2MB/s), 23.6MiB/s-27.8MiB/s (24.8MB/s-29.1MB/s), io=381MiB (400MB), run=5003-5048msec 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
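The 3-thread randread run above is driven by fio's spdk_bdev external engine: the plugin built under spdk/build/fio is LD_PRELOADed, gen_nvmf_target_json feeds the bdev_nvme_attach_controller parameters printed in the trace to fio as an SPDK JSON config on /dev/fd/62, and the job file produced by gen_fio_conf arrives on /dev/fd/61. A minimal standalone sketch of the same invocation follows; the "subsystems"/"bdev" wrapper, the Nvme0n1 bdev name and the exact job options are assumptions filled in around what the trace shows verbatim.

# Sketch only: paths, NQNs and addresses are copied from the trace above;
# everything else is an assumption, not the harness's generated config.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# The spdk_bdev ioengine is loaded via LD_PRELOAD; fio then opens the bdev
# created by the attach call above ("Nvme0" controller, namespace 1) as its file.
LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
    --name=filename0 --filename=Nvme0n1 \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json \
    --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
    --time_based --runtime=5 --thread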
00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 bdev_null0 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 [2024-06-10 14:08:44.689721] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 bdev_null1 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 bdev_null2 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:40:31.210 { 00:40:31.210 "params": { 00:40:31.210 "name": "Nvme$subsystem", 00:40:31.210 "trtype": "$TEST_TRANSPORT", 00:40:31.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:31.210 "adrfam": "ipv4", 00:40:31.210 "trsvcid": "$NVMF_PORT", 00:40:31.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:31.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:31.210 "hdgst": ${hdgst:-false}, 00:40:31.210 "ddgst": ${ddgst:-false} 00:40:31.210 }, 00:40:31.210 "method": "bdev_nvme_attach_controller" 00:40:31.210 } 00:40:31.210 EOF 00:40:31.210 )") 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:31.210 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:31.211 { 00:40:31.211 "params": { 00:40:31.211 "name": "Nvme$subsystem", 00:40:31.211 "trtype": "$TEST_TRANSPORT", 00:40:31.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:31.211 "adrfam": "ipv4", 00:40:31.211 "trsvcid": "$NVMF_PORT", 00:40:31.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:31.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:31.211 "hdgst": ${hdgst:-false}, 00:40:31.211 "ddgst": ${ddgst:-false} 00:40:31.211 }, 00:40:31.211 "method": "bdev_nvme_attach_controller" 00:40:31.211 } 00:40:31.211 EOF 00:40:31.211 )") 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:31.211 { 00:40:31.211 "params": { 00:40:31.211 "name": "Nvme$subsystem", 00:40:31.211 "trtype": "$TEST_TRANSPORT", 00:40:31.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:31.211 "adrfam": "ipv4", 00:40:31.211 "trsvcid": "$NVMF_PORT", 00:40:31.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:31.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:31.211 "hdgst": ${hdgst:-false}, 00:40:31.211 "ddgst": ${ddgst:-false} 00:40:31.211 }, 00:40:31.211 "method": "bdev_nvme_attach_controller" 00:40:31.211 } 00:40:31.211 EOF 00:40:31.211 )") 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:31.211 "params": { 00:40:31.211 "name": "Nvme0", 00:40:31.211 "trtype": "tcp", 00:40:31.211 "traddr": "10.0.0.2", 00:40:31.211 "adrfam": "ipv4", 00:40:31.211 "trsvcid": "4420", 00:40:31.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:31.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:31.211 "hdgst": false, 00:40:31.211 "ddgst": false 00:40:31.211 }, 00:40:31.211 "method": "bdev_nvme_attach_controller" 00:40:31.211 },{ 00:40:31.211 "params": { 00:40:31.211 "name": "Nvme1", 00:40:31.211 "trtype": "tcp", 00:40:31.211 "traddr": "10.0.0.2", 00:40:31.211 "adrfam": "ipv4", 00:40:31.211 "trsvcid": "4420", 00:40:31.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:31.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:31.211 "hdgst": false, 00:40:31.211 "ddgst": false 00:40:31.211 }, 00:40:31.211 "method": "bdev_nvme_attach_controller" 00:40:31.211 },{ 00:40:31.211 "params": { 00:40:31.211 "name": "Nvme2", 00:40:31.211 "trtype": "tcp", 00:40:31.211 "traddr": "10.0.0.2", 00:40:31.211 "adrfam": "ipv4", 00:40:31.211 "trsvcid": "4420", 00:40:31.211 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:31.211 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:31.211 "hdgst": false, 00:40:31.211 "ddgst": false 00:40:31.211 }, 00:40:31.211 "method": "bdev_nvme_attach_controller" 00:40:31.211 }' 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1344 -- # asan_lib= 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:31.211 14:08:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:31.211 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:31.211 ... 00:40:31.211 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:31.211 ... 00:40:31.211 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:31.211 ... 00:40:31.211 fio-3.35 00:40:31.211 Starting 24 threads 00:40:31.211 EAL: No free 2048 kB hugepages reported on node 1 00:40:43.430 00:40:43.430 filename0: (groupid=0, jobs=1): err= 0: pid=1673160: Mon Jun 10 14:08:56 2024 00:40:43.430 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10012msec) 00:40:43.430 slat (usec): min=8, max=102, avg=21.29, stdev= 9.78 00:40:43.430 clat (usec): min=4368, max=62480, avg=33410.20, stdev=2759.17 00:40:43.430 lat (usec): min=4377, max=62489, avg=33431.50, stdev=2759.91 00:40:43.430 clat percentiles (usec): 00:40:43.430 | 1.00th=[19530], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:40:43.430 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:43.430 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:40:43.430 | 99.00th=[36439], 99.50th=[36963], 99.90th=[62653], 99.95th=[62653], 00:40:43.430 | 99.99th=[62653] 00:40:43.430 bw ( KiB/s): min= 1792, max= 2072, per=4.17%, avg=1902.00, stdev=65.76, samples=20 00:40:43.430 iops : min= 448, max= 518, avg=475.50, stdev=16.44, samples=20 00:40:43.430 lat (msec) : 10=0.27%, 20=0.84%, 50=98.43%, 100=0.46% 00:40:43.430 cpu : usr=96.65%, sys=2.94%, ctx=14, majf=0, minf=55 00:40:43.430 IO depths : 1=5.7%, 2=11.5%, 4=23.9%, 8=52.1%, 16=6.9%, 32=0.0%, >=64=0.0% 00:40:43.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.430 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.430 issued rwts: total=4771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.430 filename0: (groupid=0, jobs=1): err= 0: pid=1673161: Mon Jun 10 14:08:56 2024 00:40:43.430 read: IOPS=486, BW=1945KiB/s (1992kB/s)(19.0MiB/10001msec) 00:40:43.430 slat (nsec): min=8264, max=63671, avg=13271.02, stdev=4961.63 00:40:43.430 clat (usec): min=5833, max=62838, avg=32791.97, stdev=4193.41 00:40:43.430 lat (usec): min=5845, max=62847, avg=32805.24, stdev=4193.78 00:40:43.430 clat percentiles (usec): 00:40:43.430 | 1.00th=[10552], 5.00th=[28181], 10.00th=[32113], 20.00th=[32900], 00:40:43.430 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:43.430 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:40:43.430 | 99.00th=[36963], 99.50th=[38011], 99.90th=[62653], 99.95th=[62653], 00:40:43.430 | 99.99th=[62653] 00:40:43.430 bw ( KiB/s): min= 1792, max= 2224, per=4.27%, avg=1946.53, stdev=117.88, samples=19 00:40:43.430 iops : min= 448, max= 556, avg=486.63, stdev=29.47, samples=19 00:40:43.430 lat (msec) : 10=0.99%, 20=2.16%, 
50=96.55%, 100=0.31% 00:40:43.430 cpu : usr=96.92%, sys=2.69%, ctx=15, majf=0, minf=46 00:40:43.430 IO depths : 1=5.6%, 2=11.3%, 4=23.3%, 8=52.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:40:43.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.430 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.430 issued rwts: total=4863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.430 filename0: (groupid=0, jobs=1): err= 0: pid=1673162: Mon Jun 10 14:08:56 2024 00:40:43.430 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10007msec) 00:40:43.430 slat (nsec): min=6847, max=75776, avg=29530.37, stdev=12613.55 00:40:43.430 clat (usec): min=7241, max=68403, avg=33404.10, stdev=2480.01 00:40:43.430 lat (usec): min=7257, max=68417, avg=33433.63, stdev=2479.85 00:40:43.430 clat percentiles (usec): 00:40:43.430 | 1.00th=[28967], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:40:43.430 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:40:43.430 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:40:43.430 | 99.00th=[36439], 99.50th=[39060], 99.90th=[55313], 99.95th=[68682], 00:40:43.430 | 99.99th=[68682] 00:40:43.430 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1886.32, stdev=57.91, samples=19 00:40:43.430 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:40:43.430 lat (msec) : 10=0.34%, 20=0.34%, 50=98.99%, 100=0.34% 00:40:43.430 cpu : usr=96.69%, sys=2.90%, ctx=14, majf=0, minf=69 00:40:43.430 IO depths : 1=6.2%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:43.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.430 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.430 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.430 filename0: (groupid=0, jobs=1): err= 0: pid=1673163: Mon Jun 10 14:08:56 2024 00:40:43.430 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10007msec) 00:40:43.430 slat (nsec): min=4280, max=81236, avg=20530.41, stdev=14044.38 00:40:43.430 clat (usec): min=7208, max=55657, avg=33699.13, stdev=3244.31 00:40:43.430 lat (usec): min=7217, max=55670, avg=33719.66, stdev=3242.60 00:40:43.430 clat percentiles (usec): 00:40:43.430 | 1.00th=[23200], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:40:43.430 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:40:43.430 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:40:43.430 | 99.00th=[46924], 99.50th=[50594], 99.90th=[55837], 99.95th=[55837], 00:40:43.430 | 99.99th=[55837] 00:40:43.430 bw ( KiB/s): min= 1664, max= 1920, per=4.12%, avg=1879.58, stdev=62.51, samples=19 00:40:43.430 iops : min= 416, max= 480, avg=469.89, stdev=15.63, samples=19 00:40:43.430 lat (msec) : 10=0.34%, 20=0.34%, 50=98.82%, 100=0.51% 00:40:43.430 cpu : usr=96.56%, sys=3.02%, ctx=16, majf=0, minf=61 00:40:43.430 IO depths : 1=0.1%, 2=1.0%, 4=5.2%, 8=76.5%, 16=17.3%, 32=0.0%, >=64=0.0% 00:40:43.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.430 complete : 0=0.0%, 4=90.5%, 8=8.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.430 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.430 filename0: (groupid=0, jobs=1): err= 0: pid=1673164: Mon 
Jun 10 14:08:56 2024 00:40:43.430 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10012msec) 00:40:43.430 slat (nsec): min=8488, max=85750, avg=23606.85, stdev=12379.78 00:40:43.430 clat (usec): min=12581, max=52013, avg=33310.29, stdev=3479.24 00:40:43.430 lat (usec): min=12592, max=52051, avg=33333.90, stdev=3480.45 00:40:43.430 clat percentiles (usec): 00:40:43.430 | 1.00th=[21627], 5.00th=[27132], 10.00th=[32637], 20.00th=[33162], 00:40:43.430 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:43.430 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:40:43.430 | 99.00th=[44827], 99.50th=[45351], 99.90th=[52167], 99.95th=[52167], 00:40:43.430 | 99.99th=[52167] 00:40:43.430 bw ( KiB/s): min= 1792, max= 2176, per=4.18%, avg=1907.20, stdev=93.24, samples=20 00:40:43.430 iops : min= 448, max= 544, avg=476.80, stdev=23.31, samples=20 00:40:43.430 lat (msec) : 20=0.88%, 50=98.79%, 100=0.33% 00:40:43.430 cpu : usr=96.64%, sys=2.94%, ctx=22, majf=0, minf=53 00:40:43.431 IO depths : 1=4.1%, 2=8.8%, 4=20.8%, 8=57.7%, 16=8.5%, 32=0.0%, >=64=0.0% 00:40:43.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.431 filename0: (groupid=0, jobs=1): err= 0: pid=1673165: Mon Jun 10 14:08:56 2024 00:40:43.431 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10004msec) 00:40:43.431 slat (nsec): min=6473, max=79175, avg=21936.59, stdev=13339.89 00:40:43.431 clat (usec): min=10856, max=65304, avg=33794.09, stdev=2842.86 00:40:43.431 lat (usec): min=10870, max=65321, avg=33816.03, stdev=2841.46 00:40:43.431 clat percentiles (usec): 00:40:43.431 | 1.00th=[28967], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:40:43.431 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:40:43.431 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:40:43.431 | 99.00th=[43779], 99.50th=[57410], 99.90th=[65274], 99.95th=[65274], 00:40:43.431 | 99.99th=[65274] 00:40:43.431 bw ( KiB/s): min= 1667, max= 1920, per=4.12%, avg=1878.05, stdev=65.80, samples=19 00:40:43.431 iops : min= 416, max= 480, avg=469.47, stdev=16.58, samples=19 00:40:43.431 lat (msec) : 20=0.36%, 50=98.85%, 100=0.78% 00:40:43.431 cpu : usr=96.56%, sys=3.02%, ctx=16, majf=0, minf=64 00:40:43.431 IO depths : 1=1.9%, 2=4.0%, 4=9.8%, 8=70.1%, 16=14.2%, 32=0.0%, >=64=0.0% 00:40:43.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 complete : 0=0.0%, 4=91.2%, 8=6.4%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 issued rwts: total=4716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.431 filename0: (groupid=0, jobs=1): err= 0: pid=1673166: Mon Jun 10 14:08:56 2024 00:40:43.431 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10012msec) 00:40:43.431 slat (usec): min=11, max=104, avg=35.82, stdev=11.94 00:40:43.431 clat (usec): min=19911, max=35355, avg=33403.98, stdev=942.04 00:40:43.431 lat (usec): min=19928, max=35376, avg=33439.80, stdev=941.86 00:40:43.431 clat percentiles (usec): 00:40:43.431 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:40:43.431 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:40:43.431 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 
95.00th=[34341], 00:40:43.431 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:40:43.431 | 99.99th=[35390] 00:40:43.431 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1894.40, stdev=52.53, samples=20 00:40:43.431 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:40:43.431 lat (msec) : 20=0.13%, 50=99.87% 00:40:43.431 cpu : usr=96.54%, sys=3.05%, ctx=14, majf=0, minf=50 00:40:43.431 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:43.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.431 filename0: (groupid=0, jobs=1): err= 0: pid=1673167: Mon Jun 10 14:08:56 2024 00:40:43.431 read: IOPS=478, BW=1914KiB/s (1960kB/s)(18.7MiB/10001msec) 00:40:43.431 slat (nsec): min=8349, max=77972, avg=22828.56, stdev=12896.24 00:40:43.431 clat (usec): min=11919, max=66065, avg=33253.60, stdev=4459.21 00:40:43.431 lat (usec): min=11943, max=66090, avg=33276.43, stdev=4460.01 00:40:43.431 clat percentiles (usec): 00:40:43.431 | 1.00th=[13698], 5.00th=[26608], 10.00th=[32113], 20.00th=[32900], 00:40:43.431 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:43.431 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35914], 00:40:43.431 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:40:43.431 | 99.99th=[66323] 00:40:43.431 bw ( KiB/s): min= 1712, max= 2064, per=4.20%, avg=1914.11, stdev=87.30, samples=19 00:40:43.431 iops : min= 428, max= 516, avg=478.53, stdev=21.83, samples=19 00:40:43.431 lat (msec) : 20=1.78%, 50=96.62%, 100=1.61% 00:40:43.431 cpu : usr=96.61%, sys=2.97%, ctx=15, majf=0, minf=63 00:40:43.431 IO depths : 1=3.3%, 2=6.8%, 4=19.0%, 8=60.8%, 16=10.2%, 32=0.0%, >=64=0.0% 00:40:43.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 complete : 0=0.0%, 4=92.8%, 8=2.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 issued rwts: total=4786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.431 filename1: (groupid=0, jobs=1): err= 0: pid=1673168: Mon Jun 10 14:08:56 2024 00:40:43.431 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10008msec) 00:40:43.431 slat (usec): min=6, max=125, avg=30.86, stdev=13.57 00:40:43.431 clat (usec): min=14451, max=64088, avg=33559.25, stdev=2377.69 00:40:43.431 lat (usec): min=14469, max=64105, avg=33590.11, stdev=2375.84 00:40:43.431 clat percentiles (usec): 00:40:43.431 | 1.00th=[29492], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:40:43.431 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:40:43.431 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:40:43.431 | 99.00th=[38011], 99.50th=[44303], 99.90th=[64226], 99.95th=[64226], 00:40:43.431 | 99.99th=[64226] 00:40:43.431 bw ( KiB/s): min= 1664, max= 1920, per=4.13%, avg=1884.63, stdev=74.59, samples=19 00:40:43.431 iops : min= 416, max= 480, avg=471.16, stdev=18.65, samples=19 00:40:43.431 lat (msec) : 20=0.34%, 50=99.24%, 100=0.42% 00:40:43.431 cpu : usr=96.63%, sys=2.96%, ctx=15, majf=0, minf=62 00:40:43.431 IO depths : 1=5.3%, 2=10.7%, 4=21.8%, 8=54.4%, 16=7.9%, 32=0.0%, >=64=0.0% 00:40:43.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:40:43.431 complete : 0=0.0%, 4=93.4%, 8=1.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 issued rwts: total=4732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.431 filename1: (groupid=0, jobs=1): err= 0: pid=1673169: Mon Jun 10 14:08:56 2024 00:40:43.431 read: IOPS=471, BW=1884KiB/s (1930kB/s)(18.4MiB/10006msec) 00:40:43.431 slat (nsec): min=5353, max=76685, avg=21455.57, stdev=13329.08 00:40:43.431 clat (usec): min=6383, max=89155, avg=33814.69, stdev=5232.97 00:40:43.431 lat (usec): min=6392, max=89168, avg=33836.15, stdev=5232.29 00:40:43.431 clat percentiles (usec): 00:40:43.431 | 1.00th=[14877], 5.00th=[29754], 10.00th=[32375], 20.00th=[32900], 00:40:43.431 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:40:43.431 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[39060], 00:40:43.431 | 99.00th=[53216], 99.50th=[61080], 99.90th=[88605], 99.95th=[88605], 00:40:43.431 | 99.99th=[89654] 00:40:43.431 bw ( KiB/s): min= 1664, max= 2000, per=4.10%, avg=1870.32, stdev=70.73, samples=19 00:40:43.431 iops : min= 416, max= 500, avg=467.58, stdev=17.68, samples=19 00:40:43.431 lat (msec) : 10=0.55%, 20=1.55%, 50=96.12%, 100=1.78% 00:40:43.431 cpu : usr=96.63%, sys=2.95%, ctx=19, majf=0, minf=90 00:40:43.431 IO depths : 1=0.3%, 2=4.2%, 4=17.9%, 8=64.3%, 16=13.4%, 32=0.0%, >=64=0.0% 00:40:43.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 complete : 0=0.0%, 4=92.9%, 8=2.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 issued rwts: total=4714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.431 filename1: (groupid=0, jobs=1): err= 0: pid=1673170: Mon Jun 10 14:08:56 2024 00:40:43.431 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10012msec) 00:40:43.431 slat (usec): min=11, max=182, avg=35.06, stdev=11.93 00:40:43.431 clat (usec): min=19923, max=35392, avg=33419.49, stdev=939.06 00:40:43.431 lat (usec): min=19941, max=35411, avg=33454.56, stdev=938.59 00:40:43.431 clat percentiles (usec): 00:40:43.431 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:40:43.431 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:40:43.431 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:40:43.431 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:40:43.431 | 99.99th=[35390] 00:40:43.431 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1894.40, stdev=52.53, samples=20 00:40:43.431 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:40:43.431 lat (msec) : 20=0.08%, 50=99.92% 00:40:43.431 cpu : usr=96.53%, sys=3.06%, ctx=14, majf=0, minf=54 00:40:43.431 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:43.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.431 filename1: (groupid=0, jobs=1): err= 0: pid=1673171: Mon Jun 10 14:08:56 2024 00:40:43.431 read: IOPS=473, BW=1895KiB/s (1940kB/s)(18.5MiB/10012msec) 00:40:43.431 slat (usec): min=8, max=129, avg=31.42, stdev=12.21 00:40:43.431 clat (usec): min=14080, max=59679, avg=33515.45, stdev=1896.90 00:40:43.431 lat (usec): min=14090, max=59690, 
avg=33546.88, stdev=1896.85 00:40:43.431 clat percentiles (usec): 00:40:43.431 | 1.00th=[29754], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:40:43.431 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:40:43.431 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:40:43.431 | 99.00th=[36439], 99.50th=[38011], 99.90th=[59507], 99.95th=[59507], 00:40:43.431 | 99.99th=[59507] 00:40:43.431 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1890.40, stdev=51.98, samples=20 00:40:43.431 iops : min= 448, max= 480, avg=472.60, stdev=13.00, samples=20 00:40:43.431 lat (msec) : 20=0.17%, 50=99.49%, 100=0.34% 00:40:43.431 cpu : usr=96.71%, sys=2.87%, ctx=15, majf=0, minf=47 00:40:43.431 IO depths : 1=5.2%, 2=11.1%, 4=24.4%, 8=51.9%, 16=7.3%, 32=0.0%, >=64=0.0% 00:40:43.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.431 issued rwts: total=4742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.432 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.432 filename1: (groupid=0, jobs=1): err= 0: pid=1673172: Mon Jun 10 14:08:56 2024 00:40:43.432 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10012msec) 00:40:43.432 slat (nsec): min=8475, max=83209, avg=35365.82, stdev=12534.51 00:40:43.432 clat (usec): min=19371, max=48531, avg=33397.00, stdev=1509.12 00:40:43.432 lat (usec): min=19382, max=48544, avg=33432.37, stdev=1509.73 00:40:43.432 clat percentiles (usec): 00:40:43.432 | 1.00th=[29754], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:40:43.432 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:40:43.432 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:40:43.432 | 99.00th=[35390], 99.50th=[37487], 99.90th=[47973], 99.95th=[48497], 00:40:43.432 | 99.99th=[48497] 00:40:43.432 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1894.40, stdev=49.97, samples=20 00:40:43.432 iops : min= 448, max= 480, avg=473.60, stdev=12.49, samples=20 00:40:43.432 lat (msec) : 20=0.15%, 50=99.85% 00:40:43.432 cpu : usr=96.33%, sys=3.26%, ctx=15, majf=0, minf=55 00:40:43.432 IO depths : 1=5.6%, 2=11.2%, 4=24.4%, 8=51.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:40:43.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.432 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.432 filename1: (groupid=0, jobs=1): err= 0: pid=1673174: Mon Jun 10 14:08:56 2024 00:40:43.432 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10001msec) 00:40:43.432 slat (usec): min=8, max=103, avg=24.17, stdev=10.50 00:40:43.432 clat (usec): min=7532, max=36829, avg=33259.10, stdev=2591.60 00:40:43.432 lat (usec): min=7540, max=36844, avg=33283.27, stdev=2592.15 00:40:43.432 clat percentiles (usec): 00:40:43.432 | 1.00th=[17957], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:40:43.432 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:43.432 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:40:43.432 | 99.00th=[35390], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:40:43.432 | 99.99th=[36963] 00:40:43.432 bw ( KiB/s): min= 1792, max= 2176, per=4.20%, avg=1913.26, stdev=79.52, samples=19 00:40:43.432 iops : min= 448, max= 544, avg=478.32, stdev=19.88, samples=19 
00:40:43.432 lat (msec) : 10=0.67%, 20=0.77%, 50=98.56% 00:40:43.432 cpu : usr=96.74%, sys=2.85%, ctx=8, majf=0, minf=58 00:40:43.432 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:43.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.432 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.432 filename1: (groupid=0, jobs=1): err= 0: pid=1673175: Mon Jun 10 14:08:56 2024 00:40:43.432 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10003msec) 00:40:43.432 slat (nsec): min=6500, max=76048, avg=26525.85, stdev=12082.11 00:40:43.432 clat (usec): min=14713, max=64242, avg=33538.74, stdev=2190.69 00:40:43.432 lat (usec): min=14731, max=64259, avg=33565.26, stdev=2190.06 00:40:43.432 clat percentiles (usec): 00:40:43.432 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:40:43.432 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:40:43.432 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:40:43.432 | 99.00th=[35390], 99.50th=[36963], 99.90th=[64226], 99.95th=[64226], 00:40:43.432 | 99.99th=[64226] 00:40:43.432 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1886.32, stdev=71.93, samples=19 00:40:43.432 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:40:43.432 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:40:43.432 cpu : usr=96.32%, sys=3.28%, ctx=14, majf=0, minf=47 00:40:43.432 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:43.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.432 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.432 filename1: (groupid=0, jobs=1): err= 0: pid=1673176: Mon Jun 10 14:08:56 2024 00:40:43.432 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10007msec) 00:40:43.432 slat (nsec): min=4588, max=86584, avg=31422.33, stdev=12739.34 00:40:43.432 clat (usec): min=7271, max=55555, avg=33380.89, stdev=2346.04 00:40:43.432 lat (usec): min=7280, max=55567, avg=33412.31, stdev=2345.84 00:40:43.432 clat percentiles (usec): 00:40:43.432 | 1.00th=[29492], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:40:43.432 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:40:43.432 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:40:43.432 | 99.00th=[34866], 99.50th=[39060], 99.90th=[55313], 99.95th=[55313], 00:40:43.432 | 99.99th=[55313] 00:40:43.432 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1886.32, stdev=57.91, samples=19 00:40:43.432 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:40:43.432 lat (msec) : 10=0.34%, 20=0.34%, 50=98.99%, 100=0.34% 00:40:43.432 cpu : usr=96.88%, sys=2.70%, ctx=11, majf=0, minf=47 00:40:43.432 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:43.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.432 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.432 filename2: (groupid=0, jobs=1): 
err= 0: pid=1673177: Mon Jun 10 14:08:56 2024 00:40:43.432 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:40:43.432 slat (nsec): min=8996, max=85850, avg=30120.66, stdev=12869.02 00:40:43.432 clat (usec): min=28739, max=52091, avg=33591.01, stdev=1345.77 00:40:43.432 lat (usec): min=28749, max=52118, avg=33621.13, stdev=1344.44 00:40:43.432 clat percentiles (usec): 00:40:43.432 | 1.00th=[30278], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:40:43.432 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:43.432 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:40:43.432 | 99.00th=[37487], 99.50th=[38536], 99.90th=[52167], 99.95th=[52167], 00:40:43.432 | 99.99th=[52167] 00:40:43.432 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1888.00, stdev=56.87, samples=20 00:40:43.432 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:40:43.432 lat (msec) : 50=99.66%, 100=0.34% 00:40:43.432 cpu : usr=96.58%, sys=2.96%, ctx=14, majf=0, minf=57 00:40:43.432 IO depths : 1=5.7%, 2=11.8%, 4=24.8%, 8=50.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:40:43.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.432 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.432 filename2: (groupid=0, jobs=1): err= 0: pid=1673178: Mon Jun 10 14:08:56 2024 00:40:43.432 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10008msec) 00:40:43.432 slat (nsec): min=6614, max=80270, avg=33864.57, stdev=12535.91 00:40:43.432 clat (usec): min=14525, max=69041, avg=33513.95, stdev=2383.70 00:40:43.432 lat (usec): min=14563, max=69057, avg=33547.81, stdev=2383.04 00:40:43.432 clat percentiles (usec): 00:40:43.432 | 1.00th=[29230], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:40:43.432 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:40:43.432 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:40:43.432 | 99.00th=[38536], 99.50th=[40633], 99.90th=[64226], 99.95th=[68682], 00:40:43.432 | 99.99th=[68682] 00:40:43.432 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1886.32, stdev=71.93, samples=19 00:40:43.432 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:40:43.432 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:40:43.432 cpu : usr=96.61%, sys=2.98%, ctx=25, majf=0, minf=41 00:40:43.432 IO depths : 1=5.5%, 2=11.2%, 4=24.0%, 8=52.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:40:43.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.432 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.432 filename2: (groupid=0, jobs=1): err= 0: pid=1673179: Mon Jun 10 14:08:56 2024 00:40:43.432 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10004msec) 00:40:43.432 slat (nsec): min=6432, max=76982, avg=32377.83, stdev=12537.25 00:40:43.432 clat (usec): min=14477, max=60968, avg=33490.34, stdev=2035.33 00:40:43.432 lat (usec): min=14505, max=60984, avg=33522.71, stdev=2034.10 00:40:43.432 clat percentiles (usec): 00:40:43.432 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:40:43.432 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:40:43.432 | 70.00th=[33817], 
80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:40:43.432 | 99.00th=[35390], 99.50th=[39584], 99.90th=[61080], 99.95th=[61080], 00:40:43.432 | 99.99th=[61080] 00:40:43.432 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1886.32, stdev=71.93, samples=19 00:40:43.432 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:40:43.432 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:40:43.432 cpu : usr=96.83%, sys=2.79%, ctx=11, majf=0, minf=53 00:40:43.432 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:43.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.432 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.432 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.432 filename2: (groupid=0, jobs=1): err= 0: pid=1673180: Mon Jun 10 14:08:56 2024 00:40:43.432 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10012msec) 00:40:43.432 slat (nsec): min=8357, max=79762, avg=21972.63, stdev=12605.73 00:40:43.432 clat (usec): min=16969, max=51992, avg=33523.45, stdev=1709.83 00:40:43.432 lat (usec): min=16986, max=52005, avg=33545.42, stdev=1708.98 00:40:43.432 clat percentiles (usec): 00:40:43.432 | 1.00th=[29492], 5.00th=[32375], 10.00th=[32637], 20.00th=[33162], 00:40:43.432 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:43.432 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:40:43.433 | 99.00th=[35390], 99.50th=[40633], 99.90th=[49546], 99.95th=[49546], 00:40:43.433 | 99.99th=[52167] 00:40:43.433 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1894.40, stdev=52.53, samples=20 00:40:43.433 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:40:43.433 lat (msec) : 20=0.44%, 50=99.54%, 100=0.02% 00:40:43.433 cpu : usr=96.73%, sys=2.87%, ctx=16, majf=0, minf=48 00:40:43.433 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:40:43.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.433 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.433 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.433 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.433 filename2: (groupid=0, jobs=1): err= 0: pid=1673181: Mon Jun 10 14:08:56 2024 00:40:43.433 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.7MiB/10030msec) 00:40:43.433 slat (nsec): min=8262, max=55094, avg=14437.14, stdev=5684.97 00:40:43.433 clat (usec): min=13360, max=59288, avg=33458.03, stdev=3624.15 00:40:43.433 lat (usec): min=13369, max=59297, avg=33472.46, stdev=3624.22 00:40:43.433 clat percentiles (usec): 00:40:43.433 | 1.00th=[18220], 5.00th=[25822], 10.00th=[32637], 20.00th=[33162], 00:40:43.433 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:40:43.433 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[38536], 00:40:43.433 | 99.00th=[44303], 99.50th=[44303], 99.90th=[52167], 99.95th=[59507], 00:40:43.433 | 99.99th=[59507] 00:40:43.433 bw ( KiB/s): min= 1760, max= 2048, per=4.18%, avg=1905.60, stdev=60.96, samples=20 00:40:43.433 iops : min= 440, max= 512, avg=476.40, stdev=15.24, samples=20 00:40:43.433 lat (msec) : 20=1.34%, 50=98.49%, 100=0.17% 00:40:43.433 cpu : usr=96.86%, sys=2.73%, ctx=14, majf=0, minf=56 00:40:43.433 IO depths : 1=4.6%, 2=10.0%, 4=22.6%, 8=54.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:40:43.433 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.433 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.433 issued rwts: total=4780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.433 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.433 filename2: (groupid=0, jobs=1): err= 0: pid=1673182: Mon Jun 10 14:08:56 2024 00:40:43.433 read: IOPS=475, BW=1901KiB/s (1947kB/s)(18.6MiB/10012msec) 00:40:43.433 slat (usec): min=8, max=130, avg=32.75, stdev=12.49 00:40:43.433 clat (usec): min=5468, max=55646, avg=33398.46, stdev=1985.13 00:40:43.433 lat (usec): min=5477, max=55677, avg=33431.21, stdev=1986.28 00:40:43.433 clat percentiles (usec): 00:40:43.433 | 1.00th=[28181], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:40:43.433 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:40:43.433 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:40:43.433 | 99.00th=[38011], 99.50th=[40109], 99.90th=[46400], 99.95th=[48497], 00:40:43.433 | 99.99th=[55837] 00:40:43.433 bw ( KiB/s): min= 1792, max= 1968, per=4.16%, avg=1896.80, stdev=54.81, samples=20 00:40:43.433 iops : min= 448, max= 492, avg=474.20, stdev=13.70, samples=20 00:40:43.433 lat (msec) : 10=0.13%, 20=0.15%, 50=99.68%, 100=0.04% 00:40:43.433 cpu : usr=96.48%, sys=3.11%, ctx=16, majf=0, minf=52 00:40:43.433 IO depths : 1=4.6%, 2=10.1%, 4=23.8%, 8=53.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:40:43.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.433 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.433 issued rwts: total=4758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.433 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.433 filename2: (groupid=0, jobs=1): err= 0: pid=1673183: Mon Jun 10 14:08:56 2024 00:40:43.433 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10012msec) 00:40:43.433 slat (usec): min=8, max=136, avg=28.91, stdev=11.44 00:40:43.433 clat (usec): min=20026, max=47943, avg=33486.31, stdev=1074.61 00:40:43.433 lat (usec): min=20049, max=47975, avg=33515.22, stdev=1073.82 00:40:43.433 clat percentiles (usec): 00:40:43.433 | 1.00th=[31589], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:40:43.433 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:43.433 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:40:43.433 | 99.00th=[35390], 99.50th=[35914], 99.90th=[38011], 99.95th=[47449], 00:40:43.433 | 99.99th=[47973] 00:40:43.433 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1894.40, stdev=52.53, samples=20 00:40:43.433 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:40:43.433 lat (msec) : 50=100.00% 00:40:43.433 cpu : usr=96.65%, sys=2.94%, ctx=15, majf=0, minf=36 00:40:43.433 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:43.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.433 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.433 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.433 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.433 filename2: (groupid=0, jobs=1): err= 0: pid=1673184: Mon Jun 10 14:08:56 2024 00:40:43.433 read: IOPS=503, BW=2012KiB/s (2061kB/s)(19.7MiB/10038msec) 00:40:43.433 slat (nsec): min=4622, max=79137, avg=13329.54, stdev=7108.36 00:40:43.433 clat (usec): min=4729, max=65775, avg=31683.39, stdev=8185.90 00:40:43.433 
lat (usec): min=4738, max=65789, avg=31696.72, stdev=8186.97 00:40:43.433 clat percentiles (usec): 00:40:43.433 | 1.00th=[ 8225], 5.00th=[16581], 10.00th=[20579], 20.00th=[26608], 00:40:43.433 | 30.00th=[31065], 40.00th=[32900], 50.00th=[33424], 60.00th=[33424], 00:40:43.433 | 70.00th=[33817], 80.00th=[34341], 90.00th=[37487], 95.00th=[43254], 00:40:43.433 | 99.00th=[60031], 99.50th=[62653], 99.90th=[65799], 99.95th=[65799], 00:40:43.433 | 99.99th=[65799] 00:40:43.433 bw ( KiB/s): min= 1792, max= 2400, per=4.41%, avg=2013.60, stdev=176.59, samples=20 00:40:43.433 iops : min= 448, max= 600, avg=503.40, stdev=44.15, samples=20 00:40:43.433 lat (msec) : 10=1.47%, 20=7.21%, 50=88.20%, 100=3.13% 00:40:43.433 cpu : usr=96.51%, sys=3.04%, ctx=19, majf=0, minf=109 00:40:43.433 IO depths : 1=1.7%, 2=3.4%, 4=12.1%, 8=71.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:40:43.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.433 complete : 0=0.0%, 4=90.8%, 8=4.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.433 issued rwts: total=5050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.433 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:43.433 00:40:43.433 Run status group 0 (all jobs): 00:40:43.433 READ: bw=44.5MiB/s (46.7MB/s), 1884KiB/s-2012KiB/s (1930kB/s-2061kB/s), io=447MiB (469MB), run=10001-10038msec 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:43.433 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.434 bdev_null0 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.434 [2024-06-10 14:08:56.533252] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.434 bdev_null1 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:43.434 
14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:43.434 { 00:40:43.434 "params": { 00:40:43.434 "name": "Nvme$subsystem", 00:40:43.434 "trtype": "$TEST_TRANSPORT", 00:40:43.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:43.434 "adrfam": "ipv4", 00:40:43.434 "trsvcid": "$NVMF_PORT", 00:40:43.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:43.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:43.434 "hdgst": ${hdgst:-false}, 00:40:43.434 "ddgst": ${ddgst:-false} 00:40:43.434 }, 00:40:43.434 "method": "bdev_nvme_attach_controller" 00:40:43.434 } 00:40:43.434 EOF 00:40:43.434 )") 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:43.434 { 00:40:43.434 "params": { 00:40:43.434 "name": "Nvme$subsystem", 00:40:43.434 "trtype": "$TEST_TRANSPORT", 00:40:43.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:43.434 "adrfam": "ipv4", 00:40:43.434 "trsvcid": "$NVMF_PORT", 00:40:43.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:43.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:43.434 "hdgst": ${hdgst:-false}, 00:40:43.434 "ddgst": ${ddgst:-false} 00:40:43.434 }, 00:40:43.434 "method": 
"bdev_nvme_attach_controller" 00:40:43.434 } 00:40:43.434 EOF 00:40:43.434 )") 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:43.434 "params": { 00:40:43.434 "name": "Nvme0", 00:40:43.434 "trtype": "tcp", 00:40:43.434 "traddr": "10.0.0.2", 00:40:43.434 "adrfam": "ipv4", 00:40:43.434 "trsvcid": "4420", 00:40:43.434 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:43.434 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:43.434 "hdgst": false, 00:40:43.434 "ddgst": false 00:40:43.434 }, 00:40:43.434 "method": "bdev_nvme_attach_controller" 00:40:43.434 },{ 00:40:43.434 "params": { 00:40:43.434 "name": "Nvme1", 00:40:43.434 "trtype": "tcp", 00:40:43.434 "traddr": "10.0.0.2", 00:40:43.434 "adrfam": "ipv4", 00:40:43.434 "trsvcid": "4420", 00:40:43.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:43.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:43.434 "hdgst": false, 00:40:43.434 "ddgst": false 00:40:43.434 }, 00:40:43.434 "method": "bdev_nvme_attach_controller" 00:40:43.434 }' 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:43.434 14:08:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:43.434 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:43.434 ... 00:40:43.434 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:43.434 ... 
00:40:43.434 fio-3.35 00:40:43.434 Starting 4 threads 00:40:43.434 EAL: No free 2048 kB hugepages reported on node 1 00:40:48.748 00:40:48.748 filename0: (groupid=0, jobs=1): err= 0: pid=1675217: Mon Jun 10 14:09:02 2024 00:40:48.748 read: IOPS=1943, BW=15.2MiB/s (15.9MB/s)(75.9MiB/5002msec) 00:40:48.748 slat (nsec): min=7653, max=31312, avg=11260.97, stdev=3191.38 00:40:48.748 clat (usec): min=1059, max=46038, avg=4084.48, stdev=1342.07 00:40:48.748 lat (usec): min=1068, max=46057, avg=4095.74, stdev=1341.97 00:40:48.748 clat percentiles (usec): 00:40:48.748 | 1.00th=[ 2999], 5.00th=[ 3425], 10.00th=[ 3589], 20.00th=[ 3720], 00:40:48.748 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 3982], 00:40:48.748 | 70.00th=[ 4015], 80.00th=[ 4146], 90.00th=[ 4883], 95.00th=[ 5473], 00:40:48.748 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 7767], 99.95th=[45876], 00:40:48.748 | 99.99th=[45876] 00:40:48.748 bw ( KiB/s): min=13915, max=16512, per=24.66%, avg=15569.22, stdev=758.85, samples=9 00:40:48.748 iops : min= 1739, max= 2064, avg=1946.11, stdev=94.96, samples=9 00:40:48.748 lat (msec) : 2=0.04%, 4=63.54%, 10=36.34%, 50=0.08% 00:40:48.748 cpu : usr=93.06%, sys=6.54%, ctx=7, majf=0, minf=39 00:40:48.748 IO depths : 1=0.2%, 2=1.6%, 4=70.9%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:48.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.748 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.748 issued rwts: total=9720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:48.748 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:48.748 filename0: (groupid=0, jobs=1): err= 0: pid=1675218: Mon Jun 10 14:09:02 2024 00:40:48.748 read: IOPS=2055, BW=16.1MiB/s (16.8MB/s)(80.4MiB/5004msec) 00:40:48.748 slat (nsec): min=8243, max=38754, avg=10870.28, stdev=2877.81 00:40:48.748 clat (usec): min=1251, max=43714, avg=3858.25, stdev=1294.05 00:40:48.748 lat (usec): min=1266, max=43753, avg=3869.12, stdev=1294.05 00:40:48.748 clat percentiles (usec): 00:40:48.748 | 1.00th=[ 2376], 5.00th=[ 2835], 10.00th=[ 3064], 20.00th=[ 3425], 00:40:48.748 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3818], 60.00th=[ 3916], 00:40:48.748 | 70.00th=[ 3982], 80.00th=[ 4015], 90.00th=[ 4359], 95.00th=[ 5604], 00:40:48.748 | 99.00th=[ 5997], 99.50th=[ 6128], 99.90th=[ 6718], 99.95th=[43779], 00:40:48.748 | 99.99th=[43779] 00:40:48.748 bw ( KiB/s): min=15024, max=18368, per=26.09%, avg=16469.33, stdev=1161.87, samples=9 00:40:48.748 iops : min= 1878, max= 2296, avg=2058.67, stdev=145.23, samples=9 00:40:48.748 lat (msec) : 2=0.16%, 4=76.40%, 10=23.36%, 50=0.08% 00:40:48.748 cpu : usr=93.00%, sys=6.64%, ctx=6, majf=0, minf=26 00:40:48.748 IO depths : 1=0.2%, 2=4.4%, 4=68.1%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:48.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.748 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.748 issued rwts: total=10286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:48.748 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:48.748 filename1: (groupid=0, jobs=1): err= 0: pid=1675219: Mon Jun 10 14:09:02 2024 00:40:48.748 read: IOPS=1939, BW=15.1MiB/s (15.9MB/s)(75.8MiB/5002msec) 00:40:48.748 slat (usec): min=7, max=103, avg=11.24, stdev= 3.30 00:40:48.748 clat (usec): min=2063, max=43926, avg=4093.56, stdev=1303.79 00:40:48.748 lat (usec): min=2071, max=43947, avg=4104.81, stdev=1303.79 00:40:48.748 clat percentiles (usec): 00:40:48.748 | 1.00th=[ 
2999], 5.00th=[ 3392], 10.00th=[ 3556], 20.00th=[ 3720], 00:40:48.748 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 3982], 00:40:48.748 | 70.00th=[ 4015], 80.00th=[ 4178], 90.00th=[ 5014], 95.00th=[ 5669], 00:40:48.748 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 6783], 99.95th=[43779], 00:40:48.748 | 99.99th=[43779] 00:40:48.748 bw ( KiB/s): min=14624, max=16512, per=24.59%, avg=15525.33, stdev=708.44, samples=9 00:40:48.748 iops : min= 1828, max= 2064, avg=1940.67, stdev=88.56, samples=9 00:40:48.748 lat (msec) : 4=64.59%, 10=35.33%, 50=0.08% 00:40:48.748 cpu : usr=92.50%, sys=7.12%, ctx=7, majf=0, minf=59 00:40:48.748 IO depths : 1=0.1%, 2=1.1%, 4=71.4%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:48.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.748 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.748 issued rwts: total=9700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:48.748 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:48.748 filename1: (groupid=0, jobs=1): err= 0: pid=1675220: Mon Jun 10 14:09:02 2024 00:40:48.748 read: IOPS=1955, BW=15.3MiB/s (16.0MB/s)(76.4MiB/5002msec) 00:40:48.748 slat (nsec): min=8246, max=54988, avg=11325.91, stdev=3379.51 00:40:48.748 clat (usec): min=1289, max=6784, avg=4058.68, stdev=742.65 00:40:48.748 lat (usec): min=1298, max=6793, avg=4070.01, stdev=742.29 00:40:48.748 clat percentiles (usec): 00:40:48.748 | 1.00th=[ 1582], 5.00th=[ 3294], 10.00th=[ 3589], 20.00th=[ 3720], 00:40:48.748 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 3982], 00:40:48.748 | 70.00th=[ 4015], 80.00th=[ 4293], 90.00th=[ 5211], 95.00th=[ 5735], 00:40:48.748 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 6587], 99.95th=[ 6718], 00:40:48.748 | 99.99th=[ 6783] 00:40:48.748 bw ( KiB/s): min=15072, max=17548, per=24.69%, avg=15585.33, stdev=769.24, samples=9 00:40:48.748 iops : min= 1884, max= 2193, avg=1948.11, stdev=96.00, samples=9 00:40:48.748 lat (msec) : 2=1.96%, 4=64.44%, 10=33.60% 00:40:48.748 cpu : usr=92.42%, sys=7.18%, ctx=7, majf=0, minf=36 00:40:48.748 IO depths : 1=0.1%, 2=1.4%, 4=71.1%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:48.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.748 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.748 issued rwts: total=9780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:48.748 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:48.748 00:40:48.748 Run status group 0 (all jobs): 00:40:48.748 READ: bw=61.6MiB/s (64.6MB/s), 15.1MiB/s-16.1MiB/s (15.9MB/s-16.8MB/s), io=308MiB (323MB), run=5002-5004msec 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.748 00:40:48.748 real 0m24.735s 00:40:48.748 user 5m0.574s 00:40:48.748 sys 0m10.801s 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:48.748 14:09:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:48.748 ************************************ 00:40:48.748 END TEST fio_dif_rand_params 00:40:48.748 ************************************ 00:40:48.748 14:09:03 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:40:48.748 14:09:03 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:48.748 14:09:03 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:48.748 14:09:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:49.007 ************************************ 00:40:49.007 START TEST fio_dif_digest 00:40:49.007 ************************************ 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:49.007 bdev_null0 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:49.007 [2024-06-10 14:09:03.254937] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:49.007 { 00:40:49.007 "params": { 00:40:49.007 "name": "Nvme$subsystem", 00:40:49.007 "trtype": "$TEST_TRANSPORT", 00:40:49.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:49.007 "adrfam": "ipv4", 00:40:49.007 "trsvcid": "$NVMF_PORT", 00:40:49.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:49.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:49.007 "hdgst": ${hdgst:-false}, 00:40:49.007 "ddgst": ${ddgst:-false} 00:40:49.007 }, 00:40:49.007 "method": 
"bdev_nvme_attach_controller" 00:40:49.007 } 00:40:49.007 EOF 00:40:49.007 )") 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:40:49.007 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:49.008 "params": { 00:40:49.008 "name": "Nvme0", 00:40:49.008 "trtype": "tcp", 00:40:49.008 "traddr": "10.0.0.2", 00:40:49.008 "adrfam": "ipv4", 00:40:49.008 "trsvcid": "4420", 00:40:49.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:49.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:49.008 "hdgst": true, 00:40:49.008 "ddgst": true 00:40:49.008 }, 00:40:49.008 "method": "bdev_nvme_attach_controller" 00:40:49.008 }' 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:49.008 14:09:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:49.266 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:49.266 ... 
00:40:49.266 fio-3.35 00:40:49.266 Starting 3 threads 00:40:49.525 EAL: No free 2048 kB hugepages reported on node 1 00:41:01.727 00:41:01.727 filename0: (groupid=0, jobs=1): err= 0: pid=1676454: Mon Jun 10 14:09:14 2024 00:41:01.727 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(261MiB/10047msec) 00:41:01.727 slat (nsec): min=8655, max=42335, avg=14078.56, stdev=2273.68 00:41:01.727 clat (usec): min=6293, max=98694, avg=14382.50, stdev=3859.80 00:41:01.727 lat (usec): min=6303, max=98709, avg=14396.58, stdev=3859.95 00:41:01.727 clat percentiles (usec): 00:41:01.727 | 1.00th=[ 8979], 5.00th=[10683], 10.00th=[11994], 20.00th=[13173], 00:41:01.727 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14484], 60.00th=[14746], 00:41:01.727 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:41:01.727 | 99.00th=[17433], 99.50th=[18744], 99.90th=[56361], 99.95th=[98042], 00:41:01.727 | 99.99th=[99091] 00:41:01.727 bw ( KiB/s): min=22528, max=29952, per=34.27%, avg=26726.40, stdev=1412.95, samples=20 00:41:01.727 iops : min= 176, max= 234, avg=208.80, stdev=11.04, samples=20 00:41:01.727 lat (msec) : 10=2.73%, 20=96.84%, 50=0.05%, 100=0.38% 00:41:01.727 cpu : usr=90.64%, sys=8.95%, ctx=15, majf=0, minf=123 00:41:01.727 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.727 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.727 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:01.727 filename0: (groupid=0, jobs=1): err= 0: pid=1676455: Mon Jun 10 14:09:14 2024 00:41:01.727 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(266MiB/10003msec) 00:41:01.727 slat (nsec): min=8691, max=33303, avg=13888.65, stdev=2434.06 00:41:01.727 clat (usec): min=6777, max=57652, avg=14097.29, stdev=4790.14 00:41:01.727 lat (usec): min=6787, max=57662, avg=14111.18, stdev=4790.25 00:41:01.727 clat percentiles (usec): 00:41:01.727 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[11863], 20.00th=[12649], 00:41:01.727 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13829], 60.00th=[14091], 00:41:01.727 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15401], 95.00th=[16057], 00:41:01.727 | 99.00th=[54789], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:41:01.727 | 99.99th=[57410] 00:41:01.727 bw ( KiB/s): min=21504, max=31232, per=34.87%, avg=27189.89, stdev=2692.51, samples=19 00:41:01.727 iops : min= 168, max= 244, avg=212.42, stdev=21.04, samples=19 00:41:01.727 lat (msec) : 10=5.41%, 20=93.32%, 50=0.14%, 100=1.13% 00:41:01.727 cpu : usr=90.60%, sys=9.04%, ctx=15, majf=0, minf=189 00:41:01.727 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.727 issued rwts: total=2126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.727 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:01.727 filename0: (groupid=0, jobs=1): err= 0: pid=1676458: Mon Jun 10 14:09:14 2024 00:41:01.727 read: IOPS=189, BW=23.7MiB/s (24.9MB/s)(238MiB/10047msec) 00:41:01.727 slat (nsec): min=8633, max=50312, avg=14120.05, stdev=2242.08 00:41:01.727 clat (usec): min=8997, max=58898, avg=15783.01, stdev=6106.67 00:41:01.727 lat (usec): min=9012, max=58915, avg=15797.13, stdev=6106.69 00:41:01.727 clat percentiles (usec): 
00:41:01.727 | 1.00th=[10421], 5.00th=[12256], 10.00th=[13304], 20.00th=[13960], 00:41:01.727 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:41:01.727 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16712], 95.00th=[17171], 00:41:01.727 | 99.00th=[55837], 99.50th=[57410], 99.90th=[58459], 99.95th=[58983], 00:41:01.727 | 99.99th=[58983] 00:41:01.727 bw ( KiB/s): min=22784, max=26112, per=31.24%, avg=24358.40, stdev=1099.61, samples=20 00:41:01.727 iops : min= 178, max= 204, avg=190.30, stdev= 8.59, samples=20 00:41:01.727 lat (msec) : 10=0.42%, 20=97.43%, 50=0.05%, 100=2.10% 00:41:01.727 cpu : usr=90.92%, sys=8.70%, ctx=17, majf=0, minf=174 00:41:01.727 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.727 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.727 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:01.727 00:41:01.727 Run status group 0 (all jobs): 00:41:01.727 READ: bw=76.2MiB/s (79.9MB/s), 23.7MiB/s-26.6MiB/s (24.9MB/s-27.9MB/s), io=765MiB (802MB), run=10003-10047msec 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:01.727 00:41:01.727 real 0m11.261s 00:41:01.727 user 0m40.083s 00:41:01.727 sys 0m3.054s 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:01.727 14:09:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:01.727 ************************************ 00:41:01.727 END TEST fio_dif_digest 00:41:01.727 ************************************ 00:41:01.727 14:09:14 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:01.727 14:09:14 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:01.727 14:09:14 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:01.728 14:09:14 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:41:01.728 14:09:14 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:01.728 14:09:14 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:41:01.728 14:09:14 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:01.728 14:09:14 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:01.728 rmmod nvme_tcp 00:41:01.728 rmmod 
nvme_fabrics 00:41:01.728 rmmod nvme_keyring 00:41:01.728 14:09:14 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:01.728 14:09:14 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:41:01.728 14:09:14 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:41:01.728 14:09:14 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1667429 ']' 00:41:01.728 14:09:14 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1667429 00:41:01.728 14:09:14 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 1667429 ']' 00:41:01.728 14:09:14 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 1667429 00:41:01.728 14:09:14 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:41:01.728 14:09:14 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:01.728 14:09:14 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1667429 00:41:01.728 14:09:14 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:01.728 14:09:14 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:01.728 14:09:14 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1667429' 00:41:01.728 killing process with pid 1667429 00:41:01.728 14:09:14 nvmf_dif -- common/autotest_common.sh@968 -- # kill 1667429 00:41:01.728 14:09:14 nvmf_dif -- common/autotest_common.sh@973 -- # wait 1667429 00:41:01.728 14:09:14 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:41:01.728 14:09:14 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:04.261 Waiting for block devices as requested 00:41:04.519 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:04.519 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:04.519 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:04.778 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:04.778 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:04.778 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:05.036 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:05.036 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:05.036 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:05.296 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:05.296 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:05.296 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:05.554 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:05.554 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:05.554 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:05.812 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:05.812 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:41:06.071 14:09:20 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:06.071 14:09:20 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:06.071 14:09:20 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:06.071 14:09:20 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:06.071 14:09:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:06.071 14:09:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:06.071 14:09:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:07.974 14:09:22 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:07.974 00:41:07.974 real 1m20.975s 00:41:07.974 user 7m33.321s 00:41:07.974 sys 0m34.473s 00:41:07.974 14:09:22 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:07.974 14:09:22 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:41:07.974 ************************************ 00:41:07.974 END TEST nvmf_dif 00:41:07.974 ************************************ 00:41:08.233 14:09:22 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:08.233 14:09:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:41:08.233 14:09:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:08.233 14:09:22 -- common/autotest_common.sh@10 -- # set +x 00:41:08.233 ************************************ 00:41:08.233 START TEST nvmf_abort_qd_sizes 00:41:08.233 ************************************ 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:08.233 * Looking for test storage... 00:41:08.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:08.233 14:09:22 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:41:08.233 14:09:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:18.207 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:18.207 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:18.207 Found net devices under 0000:af:00.0: cvl_0_0 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:18.207 Found net devices under 0000:af:00.1: cvl_0_1 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:18.207 14:09:30 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:18.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:18.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:41:18.207 00:41:18.207 --- 10.0.0.2 ping statistics --- 00:41:18.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:18.207 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:18.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:18.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:41:18.207 00:41:18.207 --- 10.0.0.1 ping statistics --- 00:41:18.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:18.207 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:41:18.207 14:09:31 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:21.493 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:41:21.493 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:22.871 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1686515 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1686515 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 1686515 ']' 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
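Everything nvmf_tcp_init does above boils down to a small point-to-point topology: one E810 port is moved into a private network namespace to act as the NVMe/TCP target, the other stays in the root namespace as the initiator, and both directions are verified with a ping. Condensed into plain shell, with the interface names, addresses and port taken from this run:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                             # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1         # target namespace -> root namespace

From here on, every command that must run on the target side is prefixed with ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD array in the trace).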
00:41:22.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:22.871 14:09:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:22.871 [2024-06-10 14:09:37.328291] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:41:22.871 [2024-06-10 14:09:37.328351] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:23.130 EAL: No free 2048 kB hugepages reported on node 1 00:41:23.130 [2024-06-10 14:09:37.455716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:23.130 [2024-06-10 14:09:37.543154] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:23.130 [2024-06-10 14:09:37.543199] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:23.130 [2024-06-10 14:09:37.543213] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:23.130 [2024-06-10 14:09:37.543225] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:23.130 [2024-06-10 14:09:37.543235] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:23.130 [2024-06-10 14:09:37.543291] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:41:23.130 [2024-06-10 14:09:37.543383] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:41:23.130 [2024-06-10 14:09:37.543497] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:41:23.130 [2024-06-10 14:09:37.543496] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:24.068 14:09:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:24.068 ************************************ 00:41:24.068 START TEST spdk_target_abort 00:41:24.068 ************************************ 00:41:24.068 14:09:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:41:24.068 14:09:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:24.068 14:09:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:41:24.068 14:09:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.068 14:09:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:27.358 spdk_targetn1 00:41:27.358 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.358 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:27.358 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.358 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:27.358 [2024-06-10 14:09:41.160121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:27.358 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.358 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:27.358 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:27.359 [2024-06-10 14:09:41.196399] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:27.359 14:09:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:27.359 EAL: No free 2048 kB hugepages reported on node 1 
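Stripped of the rpc_cmd/xtrace plumbing, the spdk_target_abort setup traced above is a short JSON-RPC sequence against the nvmf_tgt launched inside the target namespace, followed by one run of the abort example per queue depth. A condensed sketch; paths are shortened from the full workspace paths in the log (an assumption for brevity), and rpc.py talks to the default /var/tmp/spdk.sock:

    RPC="scripts/rpc.py"
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    # the test waits for the RPC socket (waitforlisten) before issuing commands

    $RPC bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target   # creates bdev spdk_targetn1
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

Each run then prints the number of I/Os completed, the number of aborts submitted, and a summary of how many aborts completed with success versus unsuccess status, which is what the result blocks below show.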
00:41:30.677 Initializing NVMe Controllers 00:41:30.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:30.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:30.677 Initialization complete. Launching workers. 00:41:30.677 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9319, failed: 0 00:41:30.677 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1288, failed to submit 8031 00:41:30.677 success 803, unsuccess 485, failed 0 00:41:30.677 14:09:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:30.677 14:09:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:30.677 EAL: No free 2048 kB hugepages reported on node 1 00:41:33.293 Initializing NVMe Controllers 00:41:33.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:33.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:33.293 Initialization complete. Launching workers. 00:41:33.293 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8672, failed: 0 00:41:33.293 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1274, failed to submit 7398 00:41:33.293 success 334, unsuccess 940, failed 0 00:41:33.293 14:09:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:33.293 14:09:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:33.293 EAL: No free 2048 kB hugepages reported on node 1 00:41:36.663 Initializing NVMe Controllers 00:41:36.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:36.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:36.663 Initialization complete. Launching workers. 
00:41:36.663 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36787, failed: 0 00:41:36.663 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2564, failed to submit 34223 00:41:36.663 success 624, unsuccess 1940, failed 0 00:41:36.663 14:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:36.663 14:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.663 14:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:36.663 14:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.663 14:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:36.663 14:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.663 14:09:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:38.566 14:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:38.566 14:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1686515 00:41:38.566 14:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 1686515 ']' 00:41:38.566 14:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 1686515 00:41:38.566 14:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:41:38.566 14:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:38.566 14:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1686515 00:41:38.566 14:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:38.566 14:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:38.566 14:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1686515' 00:41:38.566 killing process with pid 1686515 00:41:38.566 14:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 1686515 00:41:38.566 14:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 1686515 00:41:38.824 00:41:38.824 real 0m14.788s 00:41:38.824 user 0m58.468s 00:41:38.824 sys 0m2.736s 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:38.824 ************************************ 00:41:38.824 END TEST spdk_target_abort 00:41:38.824 ************************************ 00:41:38.824 14:09:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:38.824 14:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:41:38.824 14:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:38.824 14:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:38.824 ************************************ 00:41:38.824 START TEST kernel_target_abort 00:41:38.824 
************************************ 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # local ip 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip_candidates=() 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # local -A ip_candidates 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 nvmf_port=4420 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:38.824 14:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:43.040 Waiting for block devices as requested 00:41:43.040 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:43.040 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:43.040 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:43.040 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:43.299 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:43.299 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:43.299 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:43.299 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:43.558 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:43.558 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:43.558 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:43.818 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:43.818 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:43.818 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:44.077 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:44.077 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:44.077 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:44.336 No valid GPT data, bailing 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:41:44.336 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:44.337 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:44.337 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:44.337 14:09:58 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # echo SPDK-test 00:41:44.337 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo 1 00:41:44.337 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ -b /dev/nvme0n1 ]] 00:41:44.337 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo /dev/nvme0n1 00:41:44.337 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo 1 00:41:44.337 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # echo 10.0.0.1 00:41:44.337 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # echo tcp 00:41:44.337 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # echo 4420 00:41:44.337 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # echo ipv4 00:41:44.337 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:44.337 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:41:44.596 00:41:44.596 Discovery Log Number of Records 2, Generation counter 2 00:41:44.596 =====Discovery Log Entry 0====== 00:41:44.597 trtype: tcp 00:41:44.597 adrfam: ipv4 00:41:44.597 subtype: current discovery subsystem 00:41:44.597 treq: not specified, sq flow control disable supported 00:41:44.597 portid: 1 00:41:44.597 trsvcid: 4420 00:41:44.597 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:44.597 traddr: 10.0.0.1 00:41:44.597 eflags: none 00:41:44.597 sectype: none 00:41:44.597 =====Discovery Log Entry 1====== 00:41:44.597 trtype: tcp 00:41:44.597 adrfam: ipv4 00:41:44.597 subtype: nvme subsystem 00:41:44.597 treq: not specified, sq flow control disable supported 00:41:44.597 portid: 1 00:41:44.597 trsvcid: 4420 00:41:44.597 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:44.597 traddr: 10.0.0.1 00:41:44.597 eflags: none 00:41:44.597 sectype: none 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- 
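For the kernel_target_abort half there is no SPDK process on the target side: configure_kernel_target builds an NVMe/TCP target out of the in-kernel nvmet driver through configfs, backed by the local /dev/nvme0n1, and the discovery output above confirms it is listening on 10.0.0.1:4420. xtrace does not show redirections, so the bare echo lines in the trace omit their destinations; the sketch below fills them in with the stock nvmet configfs attribute paths, which is an assumption about what common.sh writes rather than something visible in the log (mapping 'echo SPDK-test' to attr_serial in particular is only an illustration):

    SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    PORT=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet                                   # nvmet_tcp also ends up loaded, see the teardown below

    mkdir "$SUBSYS" "$SUBSYS/namespaces/1" "$PORT"
    echo SPDK-test    > "$SUBSYS/attr_serial"        # assumed destination
    echo 1            > "$SUBSYS/attr_allow_any_host"
    echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
    echo 1            > "$SUBSYS/namespaces/1/enable"
    echo 10.0.0.1     > "$PORT/addr_traddr"
    echo tcp          > "$PORT/addr_trtype"
    echo 4420         > "$PORT/addr_trsvcid"
    echo ipv4         > "$PORT/addr_adrfam"
    ln -s "$SUBSYS" "$PORT/subsystems/"              # the target starts listening once linked

    nvme discover -t tcp -a 10.0.0.1 -s 4420         # the trace additionally passes --hostnqn/--hostid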
target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:44.597 14:09:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:44.597 EAL: No free 2048 kB hugepages reported on node 1 00:41:47.886 Initializing NVMe Controllers 00:41:47.886 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:47.886 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:47.886 Initialization complete. Launching workers. 00:41:47.886 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 50314, failed: 0 00:41:47.886 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 50314, failed to submit 0 00:41:47.886 success 0, unsuccess 50314, failed 0 00:41:47.886 14:10:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:47.886 14:10:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:47.886 EAL: No free 2048 kB hugepages reported on node 1 00:41:51.172 Initializing NVMe Controllers 00:41:51.172 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:51.172 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:51.172 Initialization complete. Launching workers. 
00:41:51.172 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87674, failed: 0 00:41:51.172 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22122, failed to submit 65552 00:41:51.172 success 0, unsuccess 22122, failed 0 00:41:51.172 14:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:51.172 14:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:51.172 EAL: No free 2048 kB hugepages reported on node 1 00:41:53.707 Initializing NVMe Controllers 00:41:53.707 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:53.707 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:53.707 Initialization complete. Launching workers. 00:41:53.707 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84396, failed: 0 00:41:53.707 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21098, failed to submit 63298 00:41:53.707 success 0, unsuccess 21098, failed 0 00:41:53.707 14:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:41:53.707 14:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:53.707 14:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 0 00:41:53.966 14:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:53.966 14:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:53.966 14:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:53.966 14:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:53.966 14:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # modules=(/sys/module/nvmet/holders/*) 00:41:53.966 14:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # modprobe -r nvmet_tcp nvmet 00:41:53.966 14:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # modprobe -r null_blk 00:41:53.966 14:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:57.256 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:57.256 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:57.256 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:57.256 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:57.256 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:57.256 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:57.515 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:41:57.515 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:57.515 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:57.515 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:57.515 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:57.515 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 
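clean_kernel_target then tears that configfs tree down in reverse order and unloads the modules, which is what the rm/rmdir/modprobe -r lines above correspond to; condensed, with the destination of the 'echo 0' filled in as an assumption:

    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir  /sys/kernel/config/nvmet/ports/1
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet
    modprobe -r null_blk

setup.sh then rebinds the ioat and NVMe devices back to vfio-pci for the remaining tests.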
00:41:57.515 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:57.515 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:57.515 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:41:57.515 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:58.894 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:41:59.153 00:41:59.153 real 0m20.282s 00:41:59.153 user 0m8.237s 00:41:59.153 sys 0m6.882s 00:41:59.153 14:10:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:59.154 14:10:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.154 ************************************ 00:41:59.154 END TEST kernel_target_abort 00:41:59.154 ************************************ 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:59.154 rmmod nvme_tcp 00:41:59.154 rmmod nvme_fabrics 00:41:59.154 rmmod nvme_keyring 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1686515 ']' 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1686515 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 1686515 ']' 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 1686515 00:41:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1686515) - No such process 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 1686515 is not found' 00:41:59.154 Process with pid 1686515 is not found 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:41:59.154 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:03.351 Waiting for block devices as requested 00:42:03.351 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:03.351 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:03.351 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:03.351 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:03.351 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:03.610 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:03.610 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:03.610 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:03.870 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:03.870 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:03.870 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:04.129 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:04.129 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:04.129 0000:80:04.2 (8086 
2021): vfio-pci -> ioatdma 00:42:04.388 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:04.388 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:04.388 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:42:04.647 14:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:04.647 14:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:04.647 14:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:04.647 14:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:04.647 14:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:04.647 14:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:04.647 14:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:07.225 14:10:21 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:07.225 00:42:07.225 real 0m58.555s 00:42:07.225 user 1m12.648s 00:42:07.225 sys 0m22.419s 00:42:07.225 14:10:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:07.225 14:10:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:07.225 ************************************ 00:42:07.225 END TEST nvmf_abort_qd_sizes 00:42:07.225 ************************************ 00:42:07.225 14:10:21 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:07.225 14:10:21 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:42:07.225 14:10:21 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:07.225 14:10:21 -- common/autotest_common.sh@10 -- # set +x 00:42:07.225 ************************************ 00:42:07.225 START TEST keyring_file 00:42:07.225 ************************************ 00:42:07.225 14:10:21 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:07.225 * Looking for test storage... 
00:42:07.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:07.225 14:10:21 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:07.225 14:10:21 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:07.225 14:10:21 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:07.225 14:10:21 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.225 14:10:21 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.225 14:10:21 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.225 14:10:21 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:07.225 14:10:21 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@47 -- # : 0 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CSnr61RoiU 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@708 -- # local prefix key digest 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@710 -- # key=00112233445566778899aabbccddeeff 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@710 -- # digest=0 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@711 -- # python - 00:42:07.225 14:10:21 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CSnr61RoiU 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CSnr61RoiU 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.CSnr61RoiU 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hiN32gNwX7 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@708 -- # local prefix key digest 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@710 -- # key=112233445566778899aabbccddeeff00 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@710 -- # digest=0 00:42:07.225 14:10:21 keyring_file -- nvmf/common.sh@711 -- # python - 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hiN32gNwX7 00:42:07.225 14:10:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hiN32gNwX7 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.hiN32gNwX7 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=1696467 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1696467 00:42:07.225 14:10:21 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1696467 ']' 00:42:07.225 14:10:21 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:07.225 14:10:21 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:07.225 14:10:21 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:07.225 14:10:21 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:07.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:07.225 14:10:21 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:07.225 14:10:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:07.226 [2024-06-10 14:10:21.457203] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
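The keyring_file test drives a bdevperf instance over its own RPC socket (/var/tmp/bperf.sock) and feeds it two on-disk TLS pre-shared keys. prep_key, traced above, turns each hex key into the NVMe TLS interchange form (the NVMeTLSkey-1 prefix handled by format_interchange_psk), writes it to a mktemp file, locks the file down to 0600, and later registers it under a logical name with the bdevperf started further below. A condensed sketch of that flow; the actual interchange encoding is produced by the small python helper in nvmf/common.sh and is not reproduced here:

    keyfile=$(mktemp)                                    # /tmp/tmp.CSnr61RoiU in this run
    # write the NVMeTLSkey-1 interchange string derived from
    # 00112233445566778899aabbccddeeff into "$keyfile" (format_interchange_psk above)
    chmod 0600 "$keyfile"                                # keep the PSK private

    # register it with the running bdevperf under the name "key0"
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$keyfile"
    # verify it is visible to the keyring
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")'

The key name (key0/key1), not the file path, is what the later bdev_nvme_attach_controller --psk arguments refer to.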
00:42:07.226 [2024-06-10 14:10:21.457270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1696467 ] 00:42:07.226 EAL: No free 2048 kB hugepages reported on node 1 00:42:07.226 [2024-06-10 14:10:21.581566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:07.501 [2024-06-10 14:10:21.670240] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:42:08.081 14:10:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:08.081 [2024-06-10 14:10:22.347887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:08.081 null0 00:42:08.081 [2024-06-10 14:10:22.379926] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:08.081 [2024-06-10 14:10:22.380345] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:08.081 [2024-06-10 14:10:22.387945] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.081 14:10:22 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:08.081 [2024-06-10 14:10:22.399973] nvmf_rpc.c: 784:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:08.081 request: 00:42:08.081 { 00:42:08.081 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:08.081 "secure_channel": false, 00:42:08.081 "listen_address": { 00:42:08.081 "trtype": "tcp", 00:42:08.081 "traddr": "127.0.0.1", 00:42:08.081 "trsvcid": "4420" 00:42:08.081 }, 00:42:08.081 "method": "nvmf_subsystem_add_listener", 00:42:08.081 "req_id": 1 00:42:08.081 } 00:42:08.081 Got JSON-RPC error response 00:42:08.081 response: 00:42:08.081 { 00:42:08.081 "code": -32602, 00:42:08.081 "message": "Invalid parameters" 00:42:08.081 } 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:42:08.081 14:10:22 
keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:42:08.081 14:10:22 keyring_file -- keyring/file.sh@46 -- # bperfpid=1696737 00:42:08.081 14:10:22 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1696737 /var/tmp/bperf.sock 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1696737 ']' 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:08.081 14:10:22 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:08.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:08.082 14:10:22 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:08.082 14:10:22 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:08.082 14:10:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:08.082 [2024-06-10 14:10:22.454768] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 00:42:08.082 [2024-06-10 14:10:22.454830] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1696737 ] 00:42:08.082 EAL: No free 2048 kB hugepages reported on node 1 00:42:08.341 [2024-06-10 14:10:22.563528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:08.341 [2024-06-10 14:10:22.649710] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:42:08.909 14:10:23 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:08.909 14:10:23 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:42:08.909 14:10:23 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CSnr61RoiU 00:42:08.909 14:10:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CSnr61RoiU 00:42:09.168 14:10:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.hiN32gNwX7 00:42:09.168 14:10:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.hiN32gNwX7 00:42:09.428 14:10:23 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:42:09.428 14:10:23 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:42:09.428 14:10:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:09.428 14:10:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:09.428 14:10:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:09.687 14:10:24 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.CSnr61RoiU == \/\t\m\p\/\t\m\p\.\C\S\n\r\6\1\R\o\i\U ]] 00:42:09.687 14:10:24 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:42:09.687 14:10:24 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:09.687 14:10:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:09.687 14:10:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:09.687 14:10:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:09.946 14:10:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.hiN32gNwX7 == \/\t\m\p\/\t\m\p\.\h\i\N\3\2\g\N\w\X\7 ]] 00:42:09.946 14:10:24 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:42:09.946 14:10:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:09.946 14:10:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:09.946 14:10:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:09.946 14:10:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:09.946 14:10:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:10.206 14:10:24 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:42:10.206 14:10:24 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:42:10.206 14:10:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:10.206 14:10:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:10.206 14:10:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:10.206 14:10:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:10.206 14:10:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:10.465 14:10:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:10.465 14:10:24 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:10.465 14:10:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:10.465 [2024-06-10 14:10:24.900513] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:10.724 nvme0n1 00:42:10.724 14:10:24 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:42:10.724 14:10:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:10.724 14:10:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:10.724 14:10:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:10.724 14:10:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:10.724 14:10:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:10.983 14:10:25 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:42:10.983 14:10:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:42:10.983 14:10:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:10.983 14:10:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:10.983 14:10:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:10.983 
14:10:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:10.983 14:10:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:10.983 14:10:25 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:42:10.983 14:10:25 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:11.242 Running I/O for 1 seconds... 00:42:12.180 00:42:12.180 Latency(us) 00:42:12.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:12.180 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:12.180 nvme0n1 : 1.01 9539.95 37.27 0.00 0.00 13341.79 5819.60 21181.24 00:42:12.180 =================================================================================================================== 00:42:12.180 Total : 9539.95 37.27 0.00 0.00 13341.79 5819.60 21181.24 00:42:12.180 0 00:42:12.180 14:10:26 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:12.180 14:10:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:12.440 14:10:26 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:42:12.440 14:10:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:12.440 14:10:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:12.440 14:10:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:12.440 14:10:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:12.440 14:10:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:12.700 14:10:27 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:42:12.700 14:10:27 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:42:12.700 14:10:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:12.700 14:10:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:12.700 14:10:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:12.700 14:10:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:12.700 14:10:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:12.959 14:10:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:12.959 14:10:27 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:12.959 14:10:27 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:42:12.959 14:10:27 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:12.959 14:10:27 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:42:12.959 14:10:27 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:12.959 14:10:27 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:42:12.959 14:10:27 
keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:12.959 14:10:27 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:12.959 14:10:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:13.218 [2024-06-10 14:10:27.510614] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:13.218 [2024-06-10 14:10:27.510892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1992fc0 (107): Transport endpoint is not connected 00:42:13.218 [2024-06-10 14:10:27.511886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1992fc0 (9): Bad file descriptor 00:42:13.218 [2024-06-10 14:10:27.512886] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:13.218 [2024-06-10 14:10:27.512902] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:13.218 [2024-06-10 14:10:27.512915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:13.218 request: 00:42:13.218 { 00:42:13.218 "name": "nvme0", 00:42:13.218 "trtype": "tcp", 00:42:13.218 "traddr": "127.0.0.1", 00:42:13.218 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:13.218 "adrfam": "ipv4", 00:42:13.219 "trsvcid": "4420", 00:42:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:13.219 "psk": "key1", 00:42:13.219 "method": "bdev_nvme_attach_controller", 00:42:13.219 "req_id": 1 00:42:13.219 } 00:42:13.219 Got JSON-RPC error response 00:42:13.219 response: 00:42:13.219 { 00:42:13.219 "code": -5, 00:42:13.219 "message": "Input/output error" 00:42:13.219 } 00:42:13.219 14:10:27 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:42:13.219 14:10:27 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:42:13.219 14:10:27 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:42:13.219 14:10:27 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:42:13.219 14:10:27 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:42:13.219 14:10:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:13.219 14:10:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:13.219 14:10:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:13.219 14:10:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:13.219 14:10:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:13.478 14:10:27 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:42:13.478 14:10:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:42:13.478 14:10:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:13.478 14:10:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:13.478 14:10:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:13.478 14:10:27 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:13.478 14:10:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:13.737 14:10:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:13.737 14:10:27 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:42:13.737 14:10:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:13.996 14:10:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:42:13.996 14:10:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:13.996 14:10:28 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:42:13.996 14:10:28 keyring_file -- keyring/file.sh@77 -- # jq length 00:42:13.996 14:10:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:14.255 14:10:28 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:42:14.255 14:10:28 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.CSnr61RoiU 00:42:14.255 14:10:28 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.CSnr61RoiU 00:42:14.255 14:10:28 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:42:14.255 14:10:28 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.CSnr61RoiU 00:42:14.255 14:10:28 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:42:14.255 14:10:28 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:14.255 14:10:28 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:42:14.255 14:10:28 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:14.255 14:10:28 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CSnr61RoiU 00:42:14.256 14:10:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CSnr61RoiU 00:42:14.515 [2024-06-10 14:10:28.889772] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CSnr61RoiU': 0100660 00:42:14.515 [2024-06-10 14:10:28.889804] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:14.515 request: 00:42:14.515 { 00:42:14.515 "name": "key0", 00:42:14.515 "path": "/tmp/tmp.CSnr61RoiU", 00:42:14.515 "method": "keyring_file_add_key", 00:42:14.515 "req_id": 1 00:42:14.515 } 00:42:14.515 Got JSON-RPC error response 00:42:14.515 response: 00:42:14.515 { 00:42:14.515 "code": -1, 00:42:14.515 "message": "Operation not permitted" 00:42:14.515 } 00:42:14.515 14:10:28 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:42:14.515 14:10:28 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:42:14.515 14:10:28 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:42:14.515 14:10:28 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:42:14.515 14:10:28 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.CSnr61RoiU 00:42:14.515 14:10:28 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.CSnr61RoiU 00:42:14.515 14:10:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CSnr61RoiU 00:42:14.774 14:10:29 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.CSnr61RoiU 00:42:14.774 14:10:29 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:42:14.774 14:10:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:14.774 14:10:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:14.774 14:10:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:14.774 14:10:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:14.774 14:10:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:15.034 14:10:29 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:42:15.034 14:10:29 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:15.034 14:10:29 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:42:15.034 14:10:29 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:15.034 14:10:29 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:42:15.034 14:10:29 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:15.034 14:10:29 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:42:15.034 14:10:29 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:15.034 14:10:29 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:15.034 14:10:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:15.293 [2024-06-10 14:10:29.583614] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.CSnr61RoiU': No such file or directory 00:42:15.293 [2024-06-10 14:10:29.583640] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:15.293 [2024-06-10 14:10:29.583671] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:15.293 [2024-06-10 14:10:29.583682] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:15.293 [2024-06-10 14:10:29.583693] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:15.293 request: 00:42:15.293 { 00:42:15.293 "name": "nvme0", 00:42:15.293 "trtype": "tcp", 00:42:15.293 "traddr": "127.0.0.1", 00:42:15.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:15.293 "adrfam": "ipv4", 00:42:15.293 "trsvcid": "4420", 00:42:15.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:15.293 "psk": "key0", 00:42:15.293 "method": "bdev_nvme_attach_controller", 
00:42:15.293 "req_id": 1 00:42:15.293 } 00:42:15.293 Got JSON-RPC error response 00:42:15.293 response: 00:42:15.293 { 00:42:15.293 "code": -19, 00:42:15.293 "message": "No such device" 00:42:15.293 } 00:42:15.293 14:10:29 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:42:15.293 14:10:29 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:42:15.293 14:10:29 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:42:15.293 14:10:29 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:42:15.293 14:10:29 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:42:15.293 14:10:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:15.551 14:10:29 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:15.551 14:10:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:15.551 14:10:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:15.551 14:10:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:15.551 14:10:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:15.551 14:10:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:15.551 14:10:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7q44MgL9C1 00:42:15.551 14:10:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:15.551 14:10:29 keyring_file -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:15.551 14:10:29 keyring_file -- nvmf/common.sh@708 -- # local prefix key digest 00:42:15.551 14:10:29 keyring_file -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:42:15.551 14:10:29 keyring_file -- nvmf/common.sh@710 -- # key=00112233445566778899aabbccddeeff 00:42:15.551 14:10:29 keyring_file -- nvmf/common.sh@710 -- # digest=0 00:42:15.551 14:10:29 keyring_file -- nvmf/common.sh@711 -- # python - 00:42:15.551 14:10:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7q44MgL9C1 00:42:15.551 14:10:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7q44MgL9C1 00:42:15.552 14:10:29 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.7q44MgL9C1 00:42:15.552 14:10:29 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7q44MgL9C1 00:42:15.552 14:10:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7q44MgL9C1 00:42:15.811 14:10:30 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:15.811 14:10:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:16.072 nvme0n1 00:42:16.072 14:10:30 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:42:16.072 14:10:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:16.072 14:10:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:16.072 14:10:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.072 14:10:30 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.072 14:10:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:16.330 14:10:30 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:42:16.330 14:10:30 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:42:16.330 14:10:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:16.588 14:10:30 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:42:16.588 14:10:30 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:42:16.588 14:10:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.589 14:10:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.589 14:10:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:16.589 14:10:31 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:42:16.589 14:10:31 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:42:16.847 14:10:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:16.847 14:10:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:16.847 14:10:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.847 14:10:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.847 14:10:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:16.847 14:10:31 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:42:16.847 14:10:31 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:16.847 14:10:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:17.105 14:10:31 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:42:17.105 14:10:31 keyring_file -- keyring/file.sh@104 -- # jq length 00:42:17.105 14:10:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:17.365 14:10:31 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:42:17.365 14:10:31 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7q44MgL9C1 00:42:17.365 14:10:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7q44MgL9C1 00:42:17.624 14:10:31 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.hiN32gNwX7 00:42:17.624 14:10:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.hiN32gNwX7 00:42:17.883 14:10:32 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:17.883 14:10:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:18.143 nvme0n1 00:42:18.143 14:10:32 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:42:18.143 14:10:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:18.402 14:10:32 keyring_file -- keyring/file.sh@112 -- # config='{ 00:42:18.402 "subsystems": [ 00:42:18.402 { 00:42:18.402 "subsystem": "keyring", 00:42:18.402 "config": [ 00:42:18.402 { 00:42:18.402 "method": "keyring_file_add_key", 00:42:18.402 "params": { 00:42:18.402 "name": "key0", 00:42:18.402 "path": "/tmp/tmp.7q44MgL9C1" 00:42:18.402 } 00:42:18.402 }, 00:42:18.403 { 00:42:18.403 "method": "keyring_file_add_key", 00:42:18.403 "params": { 00:42:18.403 "name": "key1", 00:42:18.403 "path": "/tmp/tmp.hiN32gNwX7" 00:42:18.403 } 00:42:18.403 } 00:42:18.403 ] 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "subsystem": "iobuf", 00:42:18.403 "config": [ 00:42:18.403 { 00:42:18.403 "method": "iobuf_set_options", 00:42:18.403 "params": { 00:42:18.403 "small_pool_count": 8192, 00:42:18.403 "large_pool_count": 1024, 00:42:18.403 "small_bufsize": 8192, 00:42:18.403 "large_bufsize": 135168 00:42:18.403 } 00:42:18.403 } 00:42:18.403 ] 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "subsystem": "sock", 00:42:18.403 "config": [ 00:42:18.403 { 00:42:18.403 "method": "sock_set_default_impl", 00:42:18.403 "params": { 00:42:18.403 "impl_name": "posix" 00:42:18.403 } 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "method": "sock_impl_set_options", 00:42:18.403 "params": { 00:42:18.403 "impl_name": "ssl", 00:42:18.403 "recv_buf_size": 4096, 00:42:18.403 "send_buf_size": 4096, 00:42:18.403 "enable_recv_pipe": true, 00:42:18.403 "enable_quickack": false, 00:42:18.403 "enable_placement_id": 0, 00:42:18.403 "enable_zerocopy_send_server": true, 00:42:18.403 "enable_zerocopy_send_client": false, 00:42:18.403 "zerocopy_threshold": 0, 00:42:18.403 "tls_version": 0, 00:42:18.403 "enable_ktls": false, 00:42:18.403 "enable_new_session_tickets": true 00:42:18.403 } 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "method": "sock_impl_set_options", 00:42:18.403 "params": { 00:42:18.403 "impl_name": "posix", 00:42:18.403 "recv_buf_size": 2097152, 00:42:18.403 "send_buf_size": 2097152, 00:42:18.403 "enable_recv_pipe": true, 00:42:18.403 "enable_quickack": false, 00:42:18.403 "enable_placement_id": 0, 00:42:18.403 "enable_zerocopy_send_server": true, 00:42:18.403 "enable_zerocopy_send_client": false, 00:42:18.403 "zerocopy_threshold": 0, 00:42:18.403 "tls_version": 0, 00:42:18.403 "enable_ktls": false, 00:42:18.403 "enable_new_session_tickets": false 00:42:18.403 } 00:42:18.403 } 00:42:18.403 ] 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "subsystem": "vmd", 00:42:18.403 "config": [] 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "subsystem": "accel", 00:42:18.403 "config": [ 00:42:18.403 { 00:42:18.403 "method": "accel_set_options", 00:42:18.403 "params": { 00:42:18.403 "small_cache_size": 128, 00:42:18.403 "large_cache_size": 16, 00:42:18.403 "task_count": 2048, 00:42:18.403 "sequence_count": 2048, 00:42:18.403 "buf_count": 2048 00:42:18.403 } 00:42:18.403 } 00:42:18.403 ] 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "subsystem": "bdev", 00:42:18.403 "config": [ 00:42:18.403 { 00:42:18.403 "method": "bdev_set_options", 00:42:18.403 "params": { 00:42:18.403 "bdev_io_pool_size": 65535, 00:42:18.403 
"bdev_io_cache_size": 256, 00:42:18.403 "bdev_auto_examine": true, 00:42:18.403 "iobuf_small_cache_size": 128, 00:42:18.403 "iobuf_large_cache_size": 16 00:42:18.403 } 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "method": "bdev_raid_set_options", 00:42:18.403 "params": { 00:42:18.403 "process_window_size_kb": 1024 00:42:18.403 } 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "method": "bdev_iscsi_set_options", 00:42:18.403 "params": { 00:42:18.403 "timeout_sec": 30 00:42:18.403 } 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "method": "bdev_nvme_set_options", 00:42:18.403 "params": { 00:42:18.403 "action_on_timeout": "none", 00:42:18.403 "timeout_us": 0, 00:42:18.403 "timeout_admin_us": 0, 00:42:18.403 "keep_alive_timeout_ms": 10000, 00:42:18.403 "arbitration_burst": 0, 00:42:18.403 "low_priority_weight": 0, 00:42:18.403 "medium_priority_weight": 0, 00:42:18.403 "high_priority_weight": 0, 00:42:18.403 "nvme_adminq_poll_period_us": 10000, 00:42:18.403 "nvme_ioq_poll_period_us": 0, 00:42:18.403 "io_queue_requests": 512, 00:42:18.403 "delay_cmd_submit": true, 00:42:18.403 "transport_retry_count": 4, 00:42:18.403 "bdev_retry_count": 3, 00:42:18.403 "transport_ack_timeout": 0, 00:42:18.403 "ctrlr_loss_timeout_sec": 0, 00:42:18.403 "reconnect_delay_sec": 0, 00:42:18.403 "fast_io_fail_timeout_sec": 0, 00:42:18.403 "disable_auto_failback": false, 00:42:18.403 "generate_uuids": false, 00:42:18.403 "transport_tos": 0, 00:42:18.403 "nvme_error_stat": false, 00:42:18.403 "rdma_srq_size": 0, 00:42:18.403 "io_path_stat": false, 00:42:18.403 "allow_accel_sequence": false, 00:42:18.403 "rdma_max_cq_size": 0, 00:42:18.403 "rdma_cm_event_timeout_ms": 0, 00:42:18.403 "dhchap_digests": [ 00:42:18.403 "sha256", 00:42:18.403 "sha384", 00:42:18.403 "sha512" 00:42:18.403 ], 00:42:18.403 "dhchap_dhgroups": [ 00:42:18.403 "null", 00:42:18.403 "ffdhe2048", 00:42:18.403 "ffdhe3072", 00:42:18.403 "ffdhe4096", 00:42:18.403 "ffdhe6144", 00:42:18.403 "ffdhe8192" 00:42:18.403 ] 00:42:18.403 } 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "method": "bdev_nvme_attach_controller", 00:42:18.403 "params": { 00:42:18.403 "name": "nvme0", 00:42:18.403 "trtype": "TCP", 00:42:18.403 "adrfam": "IPv4", 00:42:18.403 "traddr": "127.0.0.1", 00:42:18.403 "trsvcid": "4420", 00:42:18.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:18.403 "prchk_reftag": false, 00:42:18.403 "prchk_guard": false, 00:42:18.403 "ctrlr_loss_timeout_sec": 0, 00:42:18.403 "reconnect_delay_sec": 0, 00:42:18.403 "fast_io_fail_timeout_sec": 0, 00:42:18.403 "psk": "key0", 00:42:18.403 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:18.403 "hdgst": false, 00:42:18.403 "ddgst": false 00:42:18.403 } 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "method": "bdev_nvme_set_hotplug", 00:42:18.403 "params": { 00:42:18.403 "period_us": 100000, 00:42:18.403 "enable": false 00:42:18.403 } 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "method": "bdev_wait_for_examine" 00:42:18.403 } 00:42:18.403 ] 00:42:18.403 }, 00:42:18.403 { 00:42:18.403 "subsystem": "nbd", 00:42:18.403 "config": [] 00:42:18.403 } 00:42:18.403 ] 00:42:18.403 }' 00:42:18.403 14:10:32 keyring_file -- keyring/file.sh@114 -- # killprocess 1696737 00:42:18.403 14:10:32 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1696737 ']' 00:42:18.403 14:10:32 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1696737 00:42:18.403 14:10:32 keyring_file -- common/autotest_common.sh@954 -- # uname 00:42:18.403 14:10:32 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:18.403 14:10:32 
keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1696737 00:42:18.403 14:10:32 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:42:18.403 14:10:32 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:42:18.403 14:10:32 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1696737' 00:42:18.403 killing process with pid 1696737 00:42:18.403 14:10:32 keyring_file -- common/autotest_common.sh@968 -- # kill 1696737 00:42:18.403 Received shutdown signal, test time was about 1.000000 seconds 00:42:18.403 00:42:18.403 Latency(us) 00:42:18.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:18.403 =================================================================================================================== 00:42:18.403 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:18.403 14:10:32 keyring_file -- common/autotest_common.sh@973 -- # wait 1696737 00:42:18.664 14:10:33 keyring_file -- keyring/file.sh@117 -- # bperfpid=1698473 00:42:18.664 14:10:33 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1698473 /var/tmp/bperf.sock 00:42:18.664 14:10:33 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1698473 ']' 00:42:18.664 14:10:33 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:18.664 14:10:33 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:18.664 14:10:33 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:18.664 14:10:33 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:18.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
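[editor's note] The second bdevperf instance launched here is given "-c /dev/fd/63": its JSON configuration is not a file on disk but the output of the save_config call made a moment earlier, echoed back in over a file descriptor. A minimal sketch of that pattern, not part of the log; it assumes bash process substitution, shortens the /var/jenkins/... paths to relative ones, and requires the dump to happen while the first instance is still listening on the socket:

  config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)    # dump the live keyring/bdev config as JSON
  ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")              # <(...) is what appears as /dev/fd/63 in the trace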
00:42:18.664 14:10:33 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:18.664 14:10:33 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:42:18.664 "subsystems": [ 00:42:18.664 { 00:42:18.664 "subsystem": "keyring", 00:42:18.664 "config": [ 00:42:18.664 { 00:42:18.664 "method": "keyring_file_add_key", 00:42:18.664 "params": { 00:42:18.664 "name": "key0", 00:42:18.664 "path": "/tmp/tmp.7q44MgL9C1" 00:42:18.664 } 00:42:18.664 }, 00:42:18.664 { 00:42:18.664 "method": "keyring_file_add_key", 00:42:18.664 "params": { 00:42:18.664 "name": "key1", 00:42:18.664 "path": "/tmp/tmp.hiN32gNwX7" 00:42:18.664 } 00:42:18.664 } 00:42:18.664 ] 00:42:18.664 }, 00:42:18.664 { 00:42:18.664 "subsystem": "iobuf", 00:42:18.664 "config": [ 00:42:18.664 { 00:42:18.664 "method": "iobuf_set_options", 00:42:18.664 "params": { 00:42:18.664 "small_pool_count": 8192, 00:42:18.664 "large_pool_count": 1024, 00:42:18.664 "small_bufsize": 8192, 00:42:18.664 "large_bufsize": 135168 00:42:18.664 } 00:42:18.664 } 00:42:18.664 ] 00:42:18.664 }, 00:42:18.664 { 00:42:18.664 "subsystem": "sock", 00:42:18.664 "config": [ 00:42:18.664 { 00:42:18.664 "method": "sock_set_default_impl", 00:42:18.664 "params": { 00:42:18.664 "impl_name": "posix" 00:42:18.664 } 00:42:18.664 }, 00:42:18.664 { 00:42:18.664 "method": "sock_impl_set_options", 00:42:18.664 "params": { 00:42:18.664 "impl_name": "ssl", 00:42:18.664 "recv_buf_size": 4096, 00:42:18.664 "send_buf_size": 4096, 00:42:18.664 "enable_recv_pipe": true, 00:42:18.664 "enable_quickack": false, 00:42:18.664 "enable_placement_id": 0, 00:42:18.664 "enable_zerocopy_send_server": true, 00:42:18.664 "enable_zerocopy_send_client": false, 00:42:18.664 "zerocopy_threshold": 0, 00:42:18.664 "tls_version": 0, 00:42:18.664 "enable_ktls": false, 00:42:18.664 "enable_new_session_tickets": true 00:42:18.664 } 00:42:18.664 }, 00:42:18.664 { 00:42:18.664 "method": "sock_impl_set_options", 00:42:18.664 "params": { 00:42:18.664 "impl_name": "posix", 00:42:18.664 "recv_buf_size": 2097152, 00:42:18.664 "send_buf_size": 2097152, 00:42:18.664 "enable_recv_pipe": true, 00:42:18.664 "enable_quickack": false, 00:42:18.664 "enable_placement_id": 0, 00:42:18.664 "enable_zerocopy_send_server": true, 00:42:18.664 "enable_zerocopy_send_client": false, 00:42:18.664 "zerocopy_threshold": 0, 00:42:18.664 "tls_version": 0, 00:42:18.664 "enable_ktls": false, 00:42:18.664 "enable_new_session_tickets": false 00:42:18.664 } 00:42:18.664 } 00:42:18.664 ] 00:42:18.664 }, 00:42:18.664 { 00:42:18.664 "subsystem": "vmd", 00:42:18.664 "config": [] 00:42:18.664 }, 00:42:18.664 { 00:42:18.664 "subsystem": "accel", 00:42:18.664 "config": [ 00:42:18.664 { 00:42:18.664 "method": "accel_set_options", 00:42:18.664 "params": { 00:42:18.664 "small_cache_size": 128, 00:42:18.664 "large_cache_size": 16, 00:42:18.664 "task_count": 2048, 00:42:18.664 "sequence_count": 2048, 00:42:18.664 "buf_count": 2048 00:42:18.664 } 00:42:18.664 } 00:42:18.664 ] 00:42:18.664 }, 00:42:18.664 { 00:42:18.664 "subsystem": "bdev", 00:42:18.664 "config": [ 00:42:18.664 { 00:42:18.664 "method": "bdev_set_options", 00:42:18.664 "params": { 00:42:18.664 "bdev_io_pool_size": 65535, 00:42:18.664 "bdev_io_cache_size": 256, 00:42:18.664 "bdev_auto_examine": true, 00:42:18.664 "iobuf_small_cache_size": 128, 00:42:18.664 "iobuf_large_cache_size": 16 00:42:18.664 } 00:42:18.664 }, 00:42:18.664 { 00:42:18.664 "method": "bdev_raid_set_options", 00:42:18.664 "params": { 00:42:18.664 "process_window_size_kb": 1024 00:42:18.664 } 00:42:18.664 }, 
00:42:18.664 { 00:42:18.664 "method": "bdev_iscsi_set_options", 00:42:18.664 "params": { 00:42:18.664 "timeout_sec": 30 00:42:18.664 } 00:42:18.664 }, 00:42:18.664 { 00:42:18.664 "method": "bdev_nvme_set_options", 00:42:18.664 "params": { 00:42:18.664 "action_on_timeout": "none", 00:42:18.664 "timeout_us": 0, 00:42:18.664 "timeout_admin_us": 0, 00:42:18.664 "keep_alive_timeout_ms": 10000, 00:42:18.664 "arbitration_burst": 0, 00:42:18.664 "low_priority_weight": 0, 00:42:18.664 "medium_priority_weight": 0, 00:42:18.664 "high_priority_weight": 0, 00:42:18.664 "nvme_adminq_poll_period_us": 10000, 00:42:18.664 "nvme_ioq_poll_period_us": 0, 00:42:18.664 "io_queue_requests": 512, 00:42:18.664 "delay_cmd_submit": true, 00:42:18.664 "transport_retry_count": 4, 00:42:18.664 "bdev_retry_count": 3, 00:42:18.664 "transport_ack_timeout": 0, 00:42:18.664 "ctrlr_loss_timeout_sec": 0, 00:42:18.664 "reconnect_delay_sec": 0, 00:42:18.664 "fast_io_fail_timeout_sec": 0, 00:42:18.664 "disable_auto_failback": false, 00:42:18.664 "generate_uuids": false, 00:42:18.664 "transport_tos": 0, 00:42:18.664 "nvme_error_stat": false, 00:42:18.664 "rdma_srq_size": 0, 00:42:18.664 "io_path_stat": false, 00:42:18.664 "allow_accel_sequence": false, 00:42:18.664 "rdma_max_cq_size": 0, 00:42:18.664 "rdma_cm_event_timeout_ms": 0, 00:42:18.664 "dhchap_digests": [ 00:42:18.664 "sha256", 00:42:18.664 "sha384", 00:42:18.664 "sha512" 00:42:18.664 ], 00:42:18.665 "dhchap_dhgroups": [ 00:42:18.665 "null", 00:42:18.665 "ffdhe2048", 00:42:18.665 "ffdhe3072", 00:42:18.665 "ffdhe4096", 00:42:18.665 "ffdhe6144", 00:42:18.665 "ffdhe8192" 00:42:18.665 ] 00:42:18.665 } 00:42:18.665 }, 00:42:18.665 { 00:42:18.665 "method": "bdev_nvme_attach_controller", 00:42:18.665 "params": { 00:42:18.665 "name": "nvme0", 00:42:18.665 "trtype": "TCP", 00:42:18.665 "adrfam": "IPv4", 00:42:18.665 "traddr": "127.0.0.1", 00:42:18.665 "trsvcid": "4420", 00:42:18.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:18.665 "prchk_reftag": false, 00:42:18.665 "prchk_guard": false, 00:42:18.665 "ctrlr_loss_timeout_sec": 0, 00:42:18.665 "reconnect_delay_sec": 0, 00:42:18.665 "fast_io_fail_timeout_sec": 0, 00:42:18.665 "psk": "key0", 00:42:18.665 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:18.665 "hdgst": false, 00:42:18.665 "ddgst": false 00:42:18.665 } 00:42:18.665 }, 00:42:18.665 { 00:42:18.665 "method": "bdev_nvme_set_hotplug", 00:42:18.665 "params": { 00:42:18.665 "period_us": 100000, 00:42:18.665 "enable": false 00:42:18.665 } 00:42:18.665 }, 00:42:18.665 { 00:42:18.665 "method": "bdev_wait_for_examine" 00:42:18.665 } 00:42:18.665 ] 00:42:18.665 }, 00:42:18.665 { 00:42:18.665 "subsystem": "nbd", 00:42:18.665 "config": [] 00:42:18.665 } 00:42:18.665 ] 00:42:18.665 }' 00:42:18.665 14:10:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:18.665 [2024-06-10 14:10:33.100224] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
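[editor's note] Once the new instance is up, the trace returns to the same keyring_get_keys/jq probes used throughout the test. A sketch of those helpers as the xtrace output suggests they work (bperf_cmd, get_key and get_refcnt are the keyring/common.sh helpers seen above; the socket path matches this run):

  bperf_cmd()  { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
  get_refcnt() { get_key "$1" | jq -r .refcnt; }

  get_refcnt key0    # reported as 2 below once nvme0 holds key0; key1 stays at 1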
00:42:18.665 [2024-06-10 14:10:33.100289] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698473 ] 00:42:18.924 EAL: No free 2048 kB hugepages reported on node 1 00:42:18.924 [2024-06-10 14:10:33.210260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:18.924 [2024-06-10 14:10:33.287296] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:42:19.183 [2024-06-10 14:10:33.452217] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:19.752 14:10:33 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:19.752 14:10:33 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:42:19.752 14:10:33 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:42:19.752 14:10:33 keyring_file -- keyring/file.sh@120 -- # jq length 00:42:19.752 14:10:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.011 14:10:34 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:42:20.011 14:10:34 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:42:20.011 14:10:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:20.011 14:10:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:20.011 14:10:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:20.011 14:10:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.011 14:10:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:20.011 14:10:34 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:20.011 14:10:34 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:42:20.011 14:10:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:20.011 14:10:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:20.011 14:10:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:20.011 14:10:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.011 14:10:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:20.270 14:10:34 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:42:20.270 14:10:34 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:42:20.270 14:10:34 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:42:20.270 14:10:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:20.529 14:10:34 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:42:20.529 14:10:34 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:20.529 14:10:34 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.7q44MgL9C1 /tmp/tmp.hiN32gNwX7 00:42:20.529 14:10:34 keyring_file -- keyring/file.sh@20 -- # killprocess 1698473 00:42:20.529 14:10:34 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1698473 ']' 00:42:20.529 14:10:34 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1698473 00:42:20.529 14:10:34 keyring_file -- common/autotest_common.sh@954 -- # 
uname 00:42:20.529 14:10:34 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:20.530 14:10:34 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1698473 00:42:20.530 14:10:34 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:42:20.530 14:10:34 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:42:20.530 14:10:34 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1698473' 00:42:20.530 killing process with pid 1698473 00:42:20.530 14:10:34 keyring_file -- common/autotest_common.sh@968 -- # kill 1698473 00:42:20.530 Received shutdown signal, test time was about 1.000000 seconds 00:42:20.530 00:42:20.530 Latency(us) 00:42:20.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:20.530 =================================================================================================================== 00:42:20.530 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:20.530 14:10:34 keyring_file -- common/autotest_common.sh@973 -- # wait 1698473 00:42:20.789 14:10:35 keyring_file -- keyring/file.sh@21 -- # killprocess 1696467 00:42:20.789 14:10:35 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1696467 ']' 00:42:20.789 14:10:35 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1696467 00:42:20.789 14:10:35 keyring_file -- common/autotest_common.sh@954 -- # uname 00:42:20.789 14:10:35 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:20.789 14:10:35 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1696467 00:42:20.789 14:10:35 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:42:20.789 14:10:35 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:42:20.789 14:10:35 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1696467' 00:42:20.789 killing process with pid 1696467 00:42:20.789 14:10:35 keyring_file -- common/autotest_common.sh@968 -- # kill 1696467 00:42:20.789 [2024-06-10 14:10:35.239973] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:42:20.789 14:10:35 keyring_file -- common/autotest_common.sh@973 -- # wait 1696467 00:42:21.358 00:42:21.358 real 0m14.426s 00:42:21.358 user 0m34.219s 00:42:21.358 sys 0m3.951s 00:42:21.358 14:10:35 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:21.358 14:10:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:21.358 ************************************ 00:42:21.358 END TEST keyring_file 00:42:21.358 ************************************ 00:42:21.358 14:10:35 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:42:21.358 14:10:35 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:21.358 14:10:35 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:42:21.358 14:10:35 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:21.358 14:10:35 -- common/autotest_common.sh@10 -- # set +x 00:42:21.358 ************************************ 00:42:21.358 START TEST keyring_linux 00:42:21.358 ************************************ 00:42:21.358 14:10:35 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:21.358 * Looking for test storage... 
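[editor's note] Before the keyring_linux run that begins here, the keyring_file test above boils down to a short RPC sequence once the NOT/refcount scaffolding is stripped away. A sketch using the same RPCs and arguments shown in the trace (relative paths stand in for the full /var/jenkins/... paths; the key file is the temporary one created for this run and must be mode 0600, since the 0660 copy earlier was rejected with "Invalid permissions"):

  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7q44MgL9C1
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0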
00:42:21.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:21.358 14:10:35 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:21.358 14:10:35 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:21.358 14:10:35 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:21.358 14:10:35 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:21.358 14:10:35 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:21.358 14:10:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:21.358 14:10:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:21.358 14:10:35 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:21.358 14:10:35 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:21.358 14:10:35 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:21.358 14:10:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:21.358 14:10:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:21.358 14:10:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:21.358 14:10:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:21.358 14:10:35 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:21.358 14:10:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:21.358 14:10:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:21.358 14:10:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:21.358 14:10:35 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:21.358 14:10:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:21.358 14:10:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:21.358 14:10:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:21.358 14:10:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@708 -- # local prefix key digest 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@710 -- # key=00112233445566778899aabbccddeeff 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@710 -- # digest=0 00:42:21.358 14:10:35 keyring_linux -- nvmf/common.sh@711 -- # python - 00:42:21.618 14:10:35 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:21.618 14:10:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:21.618 /tmp/:spdk-test:key0 00:42:21.618 14:10:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:21.618 14:10:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:21.618 14:10:35 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:21.618 14:10:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:21.618 14:10:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:21.618 14:10:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:21.618 14:10:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:21.618 14:10:35 keyring_linux -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:21.618 14:10:35 keyring_linux -- nvmf/common.sh@708 -- # local prefix key digest 00:42:21.618 14:10:35 keyring_linux -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:42:21.618 14:10:35 keyring_linux -- nvmf/common.sh@710 -- # key=112233445566778899aabbccddeeff00 00:42:21.618 14:10:35 keyring_linux -- nvmf/common.sh@710 -- # digest=0 00:42:21.618 14:10:35 keyring_linux -- nvmf/common.sh@711 -- # python - 00:42:21.618 14:10:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:21.618 14:10:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:21.618 /tmp/:spdk-test:key1 00:42:21.618 14:10:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1699090 00:42:21.618 14:10:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1699090 00:42:21.618 14:10:35 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:21.618 14:10:35 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1699090 ']' 00:42:21.618 14:10:35 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:21.618 14:10:35 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:21.618 14:10:35 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:21.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:21.618 14:10:35 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:21.618 14:10:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:21.618 [2024-06-10 14:10:35.965083] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
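[editor's note] The two /tmp/:spdk-test:key files prepared here hold PSKs in the NVMe-oF TLS interchange format (the NVMeTLSkey-1:00:...: strings that appear below). A rough sketch of the prep_key step as the trace suggests it works; the redirection into the target path is an assumption, since xtrace does not show redirections, and only the key hex value, the helper names and the 0600 mode are taken from the run:

  key=00112233445566778899aabbccddeeff
  path=/tmp/:spdk-test:key0
  format_interchange_psk "$key" 0 > "$path"      # nvmf/common.sh helper; emits NVMeTLSkey-1:00:<base64>: text
  chmod 0600 "$path"                             # restrict permissions on the key material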
00:42:21.618 [2024-06-10 14:10:35.965150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699090 ] 00:42:21.618 EAL: No free 2048 kB hugepages reported on node 1 00:42:21.618 [2024-06-10 14:10:36.085924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:21.878 [2024-06-10 14:10:36.169610] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:42:22.446 14:10:36 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:22.446 14:10:36 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:42:22.446 14:10:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:22.446 14:10:36 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.446 14:10:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:22.446 [2024-06-10 14:10:36.870195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:22.446 null0 00:42:22.446 [2024-06-10 14:10:36.902232] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:22.446 [2024-06-10 14:10:36.902670] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:22.706 14:10:36 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.706 14:10:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:22.706 40433239 00:42:22.706 14:10:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:22.706 255527136 00:42:22.706 14:10:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1699332 00:42:22.706 14:10:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1699332 /var/tmp/bperf.sock 00:42:22.706 14:10:36 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:22.706 14:10:36 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1699332 ']' 00:42:22.706 14:10:36 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:22.706 14:10:36 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:22.706 14:10:36 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:22.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:22.706 14:10:36 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:22.706 14:10:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:22.706 [2024-06-10 14:10:36.978978] Starting SPDK v24.09-pre git sha1 c5b9f923d / DPDK 24.03.0 initialization... 
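By this point the trace has started spdk_tgt, opened the NVMe/TCP listener on 127.0.0.1:4420, and loaded both PSK files into the kernel session keyring with keyctl, getting back serials 40433239 and 255527136 (kernel-assigned, so they differ on every run). A small sketch of that load step, using only the keyctl calls visible in the trace:

load_psk_into_keyring() {
    local name=$1 path=$2 sn
    # Add a "user"-type key to the session keyring (@s); keyctl prints the new serial.
    sn=$(keyctl add user "$name" "$(cat "$path")" @s)
    keyctl print "$sn" >/dev/null   # sanity check: the payload can be read back
    echo "$sn"
}

sn0=$(load_psk_into_keyring :spdk-test:key0 /tmp/:spdk-test:key0)
sn1=$(load_psk_into_keyring :spdk-test:key1 /tmp/:spdk-test:key1)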
00:42:22.706 [2024-06-10 14:10:36.979038] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699332 ] 00:42:22.706 EAL: No free 2048 kB hugepages reported on node 1 00:42:22.706 [2024-06-10 14:10:37.088054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:22.706 [2024-06-10 14:10:37.174374] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:42:23.643 14:10:37 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:23.643 14:10:37 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:42:23.643 14:10:37 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:23.643 14:10:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:23.643 14:10:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:23.643 14:10:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:24.211 14:10:38 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:24.211 14:10:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:24.211 [2024-06-10 14:10:38.587145] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:24.211 nvme0n1 00:42:24.211 14:10:38 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:24.211 14:10:38 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:24.211 14:10:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:24.211 14:10:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:24.211 14:10:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:24.211 14:10:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.470 14:10:38 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:24.470 14:10:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:24.470 14:10:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:24.470 14:10:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:24.470 14:10:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.470 14:10:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.470 14:10:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:24.728 14:10:39 keyring_linux -- keyring/linux.sh@25 -- # sn=40433239 00:42:24.728 14:10:39 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:24.728 14:10:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
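The bdevperf side is then configured entirely over its RPC socket: the Linux keyring backend is enabled, framework init is released (bdevperf was started with --wait-for-rpc), the controller is attached with --psk :spdk-test:key0, and check_keys confirms exactly one key is registered. Consolidated, the sequence traced above looks like the sketch below; the rpc.py and socket paths are the ones used by this CI workspace.

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

$RPC keyring_linux_set_options --enable     # let SPDK resolve keys from the Linux session keyring
$RPC framework_start_init                   # leave --wait-for-rpc mode
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

$RPC keyring_get_keys | jq length           # check_keys expects 1 here
$RPC keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'   # 40433239 in this run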
00:42:24.728 14:10:39 keyring_linux -- keyring/linux.sh@26 -- # [[ 40433239 == \4\0\4\3\3\2\3\9 ]] 00:42:24.728 14:10:39 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 40433239 00:42:24.728 14:10:39 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:24.728 14:10:39 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:24.728 Running I/O for 1 seconds... 00:42:26.106 00:42:26.106 Latency(us) 00:42:26.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:26.106 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:26.106 nvme0n1 : 1.01 9606.95 37.53 0.00 0.00 13253.27 7497.32 21495.81 00:42:26.106 =================================================================================================================== 00:42:26.106 Total : 9606.95 37.53 0.00 0.00 13253.27 7497.32 21495.81 00:42:26.106 0 00:42:26.106 14:10:40 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:26.106 14:10:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:26.106 14:10:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:26.106 14:10:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:26.106 14:10:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:26.106 14:10:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:26.106 14:10:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:26.106 14:10:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:26.366 14:10:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:26.366 14:10:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:26.366 14:10:40 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:26.366 14:10:40 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:26.366 14:10:40 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:42:26.366 14:10:40 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:26.366 14:10:40 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:42:26.366 14:10:40 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:26.366 14:10:40 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:42:26.366 14:10:40 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:26.366 14:10:40 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:26.366 14:10:40 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:26.626 [2024-06-10 14:10:40.907820] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:26.626 [2024-06-10 14:10:40.908054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdbf50 (107): Transport endpoint is not connected 00:42:26.626 [2024-06-10 14:10:40.909046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdbf50 (9): Bad file descriptor 00:42:26.626 [2024-06-10 14:10:40.910046] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:26.626 [2024-06-10 14:10:40.910062] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:26.626 [2024-06-10 14:10:40.910073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:26.626 request: 00:42:26.626 { 00:42:26.626 "name": "nvme0", 00:42:26.626 "trtype": "tcp", 00:42:26.626 "traddr": "127.0.0.1", 00:42:26.626 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:26.626 "adrfam": "ipv4", 00:42:26.626 "trsvcid": "4420", 00:42:26.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:26.626 "psk": ":spdk-test:key1", 00:42:26.626 "method": "bdev_nvme_attach_controller", 00:42:26.626 "req_id": 1 00:42:26.626 } 00:42:26.626 Got JSON-RPC error response 00:42:26.626 response: 00:42:26.626 { 00:42:26.626 "code": -5, 00:42:26.626 "message": "Input/output error" 00:42:26.626 } 00:42:26.626 14:10:40 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:42:26.626 14:10:40 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:42:26.626 14:10:40 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:42:26.626 14:10:40 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@33 -- # sn=40433239 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 40433239 00:42:26.626 1 links removed 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@33 -- # sn=255527136 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 255527136 00:42:26.626 1 links removed 00:42:26.626 14:10:40 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 1699332 00:42:26.626 14:10:40 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1699332 ']' 00:42:26.626 14:10:40 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1699332 00:42:26.626 14:10:40 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:42:26.626 14:10:40 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:26.626 14:10:40 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1699332 00:42:26.626 14:10:41 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:42:26.626 14:10:41 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:42:26.626 14:10:41 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1699332' 00:42:26.626 killing process with pid 1699332 00:42:26.626 14:10:41 keyring_linux -- common/autotest_common.sh@968 -- # kill 1699332 00:42:26.626 Received shutdown signal, test time was about 1.000000 seconds 00:42:26.626 00:42:26.626 Latency(us) 00:42:26.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:26.626 =================================================================================================================== 00:42:26.626 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:26.626 14:10:41 keyring_linux -- common/autotest_common.sh@973 -- # wait 1699332 00:42:26.885 14:10:41 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1699090 00:42:26.885 14:10:41 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1699090 ']' 00:42:26.885 14:10:41 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1699090 00:42:26.885 14:10:41 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:42:26.885 14:10:41 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:26.885 14:10:41 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1699090 00:42:26.885 14:10:41 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:42:26.885 14:10:41 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:42:26.885 14:10:41 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1699090' 00:42:26.885 killing process with pid 1699090 00:42:26.885 14:10:41 keyring_linux -- common/autotest_common.sh@968 -- # kill 1699090 00:42:26.885 14:10:41 keyring_linux -- common/autotest_common.sh@973 -- # wait 1699090 00:42:27.145 00:42:27.145 real 0m5.930s 00:42:27.145 user 0m10.590s 00:42:27.145 sys 0m1.894s 00:42:27.145 14:10:41 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:27.145 14:10:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:27.145 ************************************ 00:42:27.145 END TEST keyring_linux 00:42:27.145 ************************************ 00:42:27.405 14:10:41 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:42:27.405 14:10:41 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:42:27.405 14:10:41 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:42:27.405 14:10:41 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:42:27.405 14:10:41 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:42:27.405 14:10:41 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:42:27.405 14:10:41 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:42:27.405 14:10:41 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:42:27.405 14:10:41 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:42:27.405 14:10:41 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 
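The second attach, using :spdk-test:key1, is expected to fail (the NOT wrapper asserts the RPC errors out), and the connection indeed drops with "Transport endpoint is not connected", returning -5. Cleanup then unlinks both keys from the session keyring and both helper processes are killed. That teardown reduces to roughly the sketch below; killprocess_sketch is an illustrative condensation of the autotest helper, and the tmp-file removal is an assumption not visible in the trace.

cleanup_keys() {
    local name sn
    for name in :spdk-test:key0 :spdk-test:key1; do
        sn=$(keyctl search @s user "$name") || continue
        keyctl unlink "$sn"        # the trace reports "1 links removed" per key
        rm -f "/tmp/$name"         # assumed; not shown in the trace
    done
}

killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if the process is already gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
}

cleanup_keys
killprocess_sketch 1699332   # bdevperf
killprocess_sketch 1699090   # spdk_tgt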
00:42:27.405 14:10:41 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:42:27.405 14:10:41 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:42:27.405 14:10:41 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:42:27.405 14:10:41 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:42:27.406 14:10:41 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:42:27.406 14:10:41 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:42:27.406 14:10:41 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:42:27.406 14:10:41 -- common/autotest_common.sh@723 -- # xtrace_disable 00:42:27.406 14:10:41 -- common/autotest_common.sh@10 -- # set +x 00:42:27.406 14:10:41 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:42:27.406 14:10:41 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:42:27.406 14:10:41 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:42:27.406 14:10:41 -- common/autotest_common.sh@10 -- # set +x 00:42:34.036 INFO: APP EXITING 00:42:34.036 INFO: killing all VMs 00:42:34.036 INFO: killing vhost app 00:42:34.036 WARN: no vhost pid file found 00:42:34.036 INFO: EXIT DONE 00:42:38.230 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:42:38.230 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:42:42.424 Cleaning 00:42:42.424 Removing: /var/run/dpdk/spdk0/config 00:42:42.424 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:42.424 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:42.424 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:42.424 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:42.424 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:42.424 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:42.424 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:42.424 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:42.424 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:42.424 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:42.424 Removing: /var/run/dpdk/spdk1/config 00:42:42.424 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:42.425 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:42.425 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:42.425 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:42.425 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:42.425 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:42.425 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:42.425 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:42.425 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:42.425 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:42.425 Removing: /var/run/dpdk/spdk1/mp_socket 00:42:42.425 Removing: /var/run/dpdk/spdk2/config 00:42:42.425 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:42.425 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:42.425 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:42.425 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:42.425 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:42.425 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:42.425 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:42.425 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:42.425 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:42.425 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:42.425 Removing: /var/run/dpdk/spdk3/config 00:42:42.425 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:42.425 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:42.425 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:42.425 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:42.425 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:42.425 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:42.425 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:42.425 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:42.425 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:42.425 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:42.425 Removing: /var/run/dpdk/spdk4/config 00:42:42.425 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:42.425 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:42.425 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:42.425 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:42.425 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:42.425 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:42.425 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:42.425 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:42.425 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:42.425 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:42.425 Removing: /dev/shm/bdev_svc_trace.1 00:42:42.425 Removing: /dev/shm/nvmf_trace.0 00:42:42.425 Removing: /dev/shm/spdk_tgt_trace.pid1217146 00:42:42.425 Removing: /var/run/dpdk/spdk0 00:42:42.425 Removing: /var/run/dpdk/spdk1 00:42:42.425 Removing: /var/run/dpdk/spdk2 00:42:42.425 Removing: /var/run/dpdk/spdk3 00:42:42.425 Removing: /var/run/dpdk/spdk4 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1214465 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1215712 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1217146 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1217730 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1218719 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1218998 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1220102 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1220125 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1220495 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1222864 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1224220 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1224539 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1224862 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1225249 00:42:42.425 Removing: 
/var/run/dpdk/spdk_pid1225676 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1225891 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1226101 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1226421 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1227514 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1230863 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1231238 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1231544 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1231800 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1232375 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1232522 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1233147 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1233225 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1233521 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1233784 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1234078 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1234102 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1234717 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1235007 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1235334 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1235635 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1235663 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1235932 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1236189 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1236455 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1236721 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1236981 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1237238 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1237496 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1237748 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1238014 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1238301 00:42:42.425 Removing: /var/run/dpdk/spdk_pid1238584 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1238871 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1239156 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1239443 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1239722 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1240011 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1240293 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1240583 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1240871 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1241161 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1241445 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1241600 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1242089 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1246776 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1300138 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1305411 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1317372 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1323913 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1329017 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1329708 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1343707 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1343715 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1344536 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1345568 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1346380 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1346922 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1347079 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1347378 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1347452 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1347463 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1348462 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1349317 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1350149 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1350921 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1350968 00:42:42.685 Removing: 
/var/run/dpdk/spdk_pid1351303 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1352635 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1353917 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1363972 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1364332 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1369637 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1376757 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1379744 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1392490 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1403507 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1405337 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1406259 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1427230 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1431986 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1463805 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1469618 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1471210 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1473150 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1473357 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1473629 00:42:42.685 Removing: /var/run/dpdk/spdk_pid1473896 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1474722 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1476600 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1477745 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1478322 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1480741 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1481543 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1482313 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1487441 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1505505 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1520027 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1524255 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1531411 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1532757 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1535067 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1540399 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1545589 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1555151 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1555163 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1560830 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1560974 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1561236 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1561714 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1561770 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1567312 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1567968 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1573505 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1576276 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1582953 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1590163 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1600422 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1608859 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1608880 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1631249 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1631851 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1632606 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1633337 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1634686 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1635594 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1636177 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1636976 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1642258 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1642530 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1649379 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1649691 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1651969 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1661024 00:42:42.945 Removing: 
/var/run/dpdk/spdk_pid1661105 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1667736 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1669743 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1671751 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1672957 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1674968 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1676314 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1687253 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1687782 00:42:42.945 Removing: /var/run/dpdk/spdk_pid1688310 00:42:43.205 Removing: /var/run/dpdk/spdk_pid1691047 00:42:43.205 Removing: /var/run/dpdk/spdk_pid1691510 00:42:43.205 Removing: /var/run/dpdk/spdk_pid1692030 00:42:43.205 Removing: /var/run/dpdk/spdk_pid1696467 00:42:43.205 Removing: /var/run/dpdk/spdk_pid1696737 00:42:43.205 Removing: /var/run/dpdk/spdk_pid1698473 00:42:43.205 Removing: /var/run/dpdk/spdk_pid1699090 00:42:43.205 Removing: /var/run/dpdk/spdk_pid1699332 00:42:43.205 Clean 00:42:43.205 14:10:57 -- common/autotest_common.sh@1450 -- # return 0 00:42:43.205 14:10:57 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:42:43.205 14:10:57 -- common/autotest_common.sh@729 -- # xtrace_disable 00:42:43.205 14:10:57 -- common/autotest_common.sh@10 -- # set +x 00:42:43.205 14:10:57 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:42:43.205 14:10:57 -- common/autotest_common.sh@729 -- # xtrace_disable 00:42:43.205 14:10:57 -- common/autotest_common.sh@10 -- # set +x 00:42:43.205 14:10:57 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:43.205 14:10:57 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:43.205 14:10:57 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:43.205 14:10:57 -- spdk/autotest.sh@391 -- # hash lcov 00:42:43.205 14:10:57 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:42:43.205 14:10:57 -- spdk/autotest.sh@393 -- # hostname 00:42:43.205 14:10:57 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:43.464 geninfo: WARNING: invalid characters removed from testname! 
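With the tests finished, autotest moves on to coverage post-processing: the baseline and test lcov captures are merged into cov_total.info, then successive lcov -r passes strip DPDK, system headers, and example/app sources from the report. The commands traced below condense to the following sketch; the flag bundle is factored into a variable purely for readability, while the log spells it out on every call.

LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
  --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
  --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q"
OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output

# Merge the baseline and test captures, then drop paths that should not be counted.
lcov $LCOV_OPTS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r $OUT/cov_total.info "$pattern" -o $OUT/cov_total.info
done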
00:43:15.541 14:11:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:15.541 14:11:28 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:17.445 14:11:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:20.045 14:11:34 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:22.577 14:11:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:24.601 14:11:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:27.136 14:11:41 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:27.396 14:11:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:27.396 14:11:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:43:27.396 14:11:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:27.396 14:11:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:27.396 14:11:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.396 14:11:41 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.396 14:11:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.396 14:11:41 -- paths/export.sh@5 -- $ export PATH 00:43:27.396 14:11:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.396 14:11:41 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:43:27.396 14:11:41 -- common/autobuild_common.sh@437 -- $ date +%s 00:43:27.396 14:11:41 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718021501.XXXXXX 00:43:27.396 14:11:41 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718021501.Yx8RT4 00:43:27.396 14:11:41 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:43:27.396 14:11:41 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:43:27.396 14:11:41 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:43:27.396 14:11:41 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:43:27.396 14:11:41 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:43:27.396 14:11:41 -- common/autobuild_common.sh@453 -- $ get_config_params 00:43:27.396 14:11:41 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:43:27.396 14:11:41 -- common/autotest_common.sh@10 -- $ set +x 00:43:27.396 14:11:41 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:43:27.396 14:11:41 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:43:27.396 14:11:41 -- pm/common@17 -- $ local monitor 00:43:27.396 14:11:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:27.396 14:11:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:27.396 14:11:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:27.396 14:11:41 -- pm/common@21 -- $ date +%s 00:43:27.396 14:11:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:27.396 14:11:41 -- pm/common@21 -- $ date +%s 00:43:27.396 
14:11:41 -- pm/common@25 -- $ sleep 1 00:43:27.396 14:11:41 -- pm/common@21 -- $ date +%s 00:43:27.396 14:11:41 -- pm/common@21 -- $ date +%s 00:43:27.396 14:11:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718021501 00:43:27.396 14:11:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718021501 00:43:27.396 14:11:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718021501 00:43:27.396 14:11:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718021501 00:43:27.396 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718021501_collect-vmstat.pm.log 00:43:27.396 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718021501_collect-cpu-load.pm.log 00:43:27.396 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718021501_collect-cpu-temp.pm.log 00:43:27.396 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718021501_collect-bmc-pm.bmc.pm.log 00:43:28.335 14:11:42 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:43:28.336 14:11:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:43:28.336 14:11:42 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:28.336 14:11:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:43:28.336 14:11:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:43:28.336 14:11:42 -- spdk/autopackage.sh@19 -- $ timing_finish 00:43:28.336 14:11:42 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:28.336 14:11:42 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:43:28.336 14:11:42 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:28.336 14:11:42 -- spdk/autopackage.sh@20 -- $ exit 0 00:43:28.336 14:11:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:43:28.336 14:11:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:43:28.336 14:11:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:43:28.336 14:11:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:28.336 14:11:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:43:28.336 14:11:42 -- pm/common@44 -- $ pid=1714916 00:43:28.336 14:11:42 -- pm/common@50 -- $ kill -TERM 1714916 00:43:28.336 14:11:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:28.336 14:11:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:43:28.336 14:11:42 -- pm/common@44 -- $ pid=1714918 00:43:28.336 14:11:42 -- pm/common@50 -- $ kill 
-TERM 1714918 00:43:28.336 14:11:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:28.336 14:11:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:43:28.336 14:11:42 -- pm/common@44 -- $ pid=1714919 00:43:28.336 14:11:42 -- pm/common@50 -- $ kill -TERM 1714919 00:43:28.336 14:11:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:28.336 14:11:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:43:28.336 14:11:42 -- pm/common@44 -- $ pid=1714942 00:43:28.336 14:11:42 -- pm/common@50 -- $ sudo -E kill -TERM 1714942 00:43:28.336 + [[ -n 1093668 ]] 00:43:28.336 + sudo kill 1093668 00:43:28.605 [Pipeline] } 00:43:28.623 [Pipeline] // stage 00:43:28.629 [Pipeline] } 00:43:28.648 [Pipeline] // timeout 00:43:28.654 [Pipeline] } 00:43:28.672 [Pipeline] // catchError 00:43:28.678 [Pipeline] } 00:43:28.696 [Pipeline] // wrap 00:43:28.703 [Pipeline] } 00:43:28.719 [Pipeline] // catchError 00:43:28.729 [Pipeline] stage 00:43:28.731 [Pipeline] { (Epilogue) 00:43:28.747 [Pipeline] catchError 00:43:28.749 [Pipeline] { 00:43:28.764 [Pipeline] echo 00:43:28.766 Cleanup processes 00:43:28.772 [Pipeline] sh 00:43:29.059 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:29.059 1715022 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:43:29.059 1715365 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:29.074 [Pipeline] sh 00:43:29.359 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:29.359 ++ grep -v 'sudo pgrep' 00:43:29.359 ++ awk '{print $1}' 00:43:29.359 + sudo kill -9 1715022 00:43:29.372 [Pipeline] sh 00:43:29.658 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:43:29.658 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:43:37.781 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:43:43.068 [Pipeline] sh 00:43:43.352 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:43:43.352 Artifacts sizes are good 00:43:43.367 [Pipeline] archiveArtifacts 00:43:43.375 Archiving artifacts 00:43:43.559 [Pipeline] sh 00:43:43.882 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:43:43.900 [Pipeline] cleanWs 00:43:43.912 [WS-CLEANUP] Deleting project workspace... 00:43:43.913 [WS-CLEANUP] Deferred wipeout is used... 00:43:43.919 [WS-CLEANUP] done 00:43:43.921 [Pipeline] } 00:43:43.939 [Pipeline] // catchError 00:43:43.951 [Pipeline] sh 00:43:44.234 + logger -p user.info -t JENKINS-CI 00:43:44.243 [Pipeline] } 00:43:44.260 [Pipeline] // stage 00:43:44.266 [Pipeline] } 00:43:44.283 [Pipeline] // node 00:43:44.289 [Pipeline] End of Pipeline 00:43:44.326 Finished: SUCCESS